diff --git a/scrapped_outputs/0012265cfd9e8129835a009e5814c990.txt b/scrapped_outputs/0012265cfd9e8129835a009e5814c990.txt new file mode 100644 index 0000000000000000000000000000000000000000..643707bcdd440e65416f02ac6003e845768e0c87 --- /dev/null +++ b/scrapped_outputs/0012265cfd9e8129835a009e5814c990.txt @@ -0,0 +1,96 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Also, to know more about reducing the memory usage of this pipeline, refer to the [“Reduce memory usage”] section here. Sample output with I2VGenXL: masterpiece, bestquality, sunset. + Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) — +An I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) → pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. target_fps (int, optional) — +Frames per second. The rate at which the generated images shall be exported to a video after generation. This is also used as a “micro-condition” during generation. num_frames (int, optional) — +The number of video frames to generate. num_inference_steps (int, optional) — +The number of denoising steps. guidance_scale (float, optional, defaults to 9.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) — +The number of videos to generate per prompt. decode_chunk_size (int, optional) — +The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency +between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once +for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts.
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple + +If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch +>>> from diffusers import I2VGenXLPipeline + +>>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +>>> pipeline.enable_model_cpu_offload() + +>>> image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?raw=true" +>>> image = load_image(image_url).convert("RGB") + +>>> prompt = "Papers were floating in the air on a table in the library" +>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +>>> generator = torch.manual_seed(8888) + +>>> frames = pipeline( +... prompt=prompt, +... image=image, +... num_inference_steps=50, +... negative_prompt=negative_prompt, +... guidance_scale=9.0, +... generator=generator +... ).frames[0] +>>> video_path = export_to_gif(frames, "i2v.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_videos_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for image-to-video pipeline. diff --git a/scrapped_outputs/002e4f7b8ab52c12963857540b2c6ad7.txt b/scrapped_outputs/002e4f7b8ab52c12963857540b2c6ad7.txt new file mode 100644 index 0000000000000000000000000000000000000000..242e37fb1de48e73893d11901cd033d448afb601 --- /dev/null +++ b/scrapped_outputs/002e4f7b8ab52c12963857540b2c6ad7.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. 
However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). 
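Before the full __call__ reference below (whose example adds concepts to an image), here is a brief, hedged sketch, not part of the original page, of the complementary direction: suppressing a concept by setting reverse_editing_direction=True. The checkpoint and prompts are only illustrative placeholders.

import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of the face of a man",
    editing_prompt=["glasses, wearing glasses"],  # concept to steer away from
    reverse_editing_direction=[True],             # True = suppress the concept instead of adding it
    edit_guidance_scale=[5.0],
    edit_warmup_steps=[10],
    edit_threshold=[0.95],
)
image = out.images[0]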
__call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. 
Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided, all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +...
edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/00338ebc720885d1d32274136bd7514e.txt b/scrapped_outputs/00338ebc720885d1d32274136bd7514e.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/00338ebc720885d1d32274136bd7514e.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. 
Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speed up training and reduce memory usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/003990abb5bccb7515ba047c3f63eebe.txt b/scrapped_outputs/003990abb5bccb7515ba047c3f63eebe.txt new file mode 100644 index 0000000000000000000000000000000000000000..9c200308786dff55b3d8f085d026e38d2ddec95d --- /dev/null +++ b/scrapped_outputs/003990abb5bccb7515ba047c3f63eebe.txt @@ -0,0 +1,96 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1, 2, or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling.
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
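This page does not include a usage snippet, so here is a hedged sketch of the typical pattern: reuse an existing pipeline's scheduler config via from_config(), override the options recommended above, and sample with a reduced step count. The checkpoint name and prompt are only examples, not part of the original page.

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Swap in DPMSolver++; solver_order=2 is the recommended setting for guided sampling.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2
)
pipe.to("cuda")

# DPMSolver++ usually produces good samples in roughly 20 steps.
image = pipe("an astronaut riding a horse on the moon", num_inference_steps=20).images[0]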
convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. 
return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/004595462592973e8bbc3c61f477d432.txt b/scrapped_outputs/004595462592973e8bbc3c61f477d432.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/004595462592973e8bbc3c61f477d432.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! 
rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. 
sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
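To make the relationship between set_timesteps() and step() concrete, here is a minimal, hedged sketch of a bare denoising loop with an unconditional UNet. The checkpoint name is an assumed example and the scheduler is constructed with default settings; it is an illustration of the API above, not code from the original page.

import torch
from diffusers import DDIMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cat-256")  # example unconditional checkpoint
scheduler = DDIMScheduler(num_train_timesteps=1000, beta_schedule="linear")

scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predict the noise residual for this timestep
    # step() returns a DDIMSchedulerOutput; prev_sample is the input to the next iteration
    sample = scheduler.step(noise_pred, t, sample).prev_sample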
DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/004a80e3475d06e8d1f59f3264b0d35b.txt b/scrapped_outputs/004a80e3475d06e8d1f59f3264b0d35b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6ee8935144841dba5f2940525340c76f92fa1f31 --- /dev/null +++ b/scrapped_outputs/004a80e3475d06e8d1f59f3264b0d35b.txt @@ -0,0 +1,215 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. 
First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. 
This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) diff --git a/scrapped_outputs/004c24a7d6387b52ef9a323876ac7239.txt b/scrapped_outputs/004c24a7d6387b52ef9a323876ac7239.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/007512d8a5a14389eb3f6aa13d0f082f.txt b/scrapped_outputs/007512d8a5a14389eb3f6aa13d0f082f.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/007512d8a5a14389eb3f6aa13d0f082f.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. 
In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. 
source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> from diffusers import DDIMScheduler, DDIMInverseScheduler + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent to inpaint the masked area. Must be between 0 and 1. 
When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
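Because the final call consumes the outputs of generate_mask() and invert(), it can be useful to inspect the intermediate mask before generating the edited image. The snippet below is only a sketch: it assumes pipe, init_image, mask_prompt, and prompt are defined exactly as in the full example that follows, and that generate_mask() is left at its default NumPy output type.
import numpy as np
from PIL import Image

# The mask comes back at latent resolution (height // vae_scale_factor, width // vae_scale_factor)
# with binary 0/1 values, so upscale it to the input image size for a quick visual check.
mask = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
mask_pil = Image.fromarray((mask.squeeze() * 255).astype(np.uint8)).resize(init_image.size)
mask_pil.save("diffedit_mask.png")
If the mask misses the region you want to edit, the tips above (adjusting mask_thresholding_ratio, or swapping the source and target prompts) are usually the first things to try. The full end-to-end example follows.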
Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/009a3df3d8ecf57196b920d396c1eb45.txt b/scrapped_outputs/009a3df3d8ecf57196b920d396c1eb45.txt new file mode 100644 index 0000000000000000000000000000000000000000..6ee8935144841dba5f2940525340c76f92fa1f31 --- /dev/null +++ b/scrapped_outputs/009a3df3d8ecf57196b920d396c1eb45.txt @@ -0,0 +1,215 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. 
First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
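The relative strength of each Motion LoRA can also be re-balanced after loading, and the generated frames can be written to an MP4 instead of a GIF. The sketch below is illustrative only: it assumes pipe and the two adapters ("zoom-out" and "pan-left") were set up as in the snippet above, the adapter weights are arbitrary example values, and export_to_video is the diffusers.utils helper (it relies on OpenCV being installed).
import torch
from diffusers.utils import export_to_video  # requires opencv-python

# Favour the zoom-out motion over the pan-left motion (illustrative weights).
pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[0.8, 0.4])

output = pipe(
    prompt="masterpiece, bestquality, sunset, fishing boats, golden hour, coastal landscape",
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_video(output.frames[0], "animation.mp4")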
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. 
This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) diff --git a/scrapped_outputs/00a44ba96e48f08abc944973f3de6edb.txt b/scrapped_outputs/00a44ba96e48f08abc944973f3de6edb.txt new file mode 100644 index 0000000000000000000000000000000000000000..1c2807daa00904639945ad258ca6e10542925a4c --- /dev/null +++ b/scrapped_outputs/00a44ba96e48f08abc944973f3de6edb.txt @@ -0,0 +1,136 @@ +Cycle Diffusion Cycle Diffusion is a text guided image-to-image generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. The abstract from the paper is: Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. 
Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. The code is publicly available at this https URL. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. CycleDiffusionPipeline class diffusers.CycleDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can only be an +instance of DDIMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image to image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
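As with other Stable Diffusion based pipelines, loading in half precision and enabling attention slicing are straightforward ways to lower memory usage. The sketch below is illustrative: it reuses the CompVis/stable-diffusion-v1-4 checkpoint from the example further down, the scheduler must be a DDIMScheduler as noted in the class parameters above, and enable_attention_slicing is a generic DiffusionPipeline helper that is assumed here rather than shown in this page's own examples.
import torch
from diffusers import CycleDiffusionPipeline, DDIMScheduler

model_id = "CompVis/stable-diffusion-v1-4"
# CycleDiffusion only accepts a DDIMScheduler (see the class parameters above).
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Half precision load plus attention slicing to reduce peak memory.
pipe = CycleDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")
pipe.enable_attention_slicing()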
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: typing.Union[str, typing.List[str]] source_prompt: typing.Union[str, typing.List[str]] image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 source_guidance_scale: typing.Optional[float] = 1 num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. source_guidance_scale (float, optional, defaults to 1) — +Guidance scale for the source prompt. This is useful to control the amount of influence the source +prompt has for encoding. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import CycleDiffusionPipeline, DDIMScheduler + +# load the pipeline +# make sure you're logged in with `huggingface-cli login` +model_id_or_path = "CompVis/stable-diffusion-v1-4" +scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler") +pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda") + +# let's download an initial image +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("horse.png") + +# let's specify a prompt +source_prompt = "An astronaut riding a horse" +prompt = "An astronaut riding an elephant" + +# call the pipeline +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.8, + guidance_scale=2, + source_guidance_scale=1, +).images[0] + +image.save("horse_to_elephant.png") + +# let's try another example +# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion +url = ( + "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png" +) +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("black.png") + +source_prompt = "A black colored car" +prompt = "A blue colored car" + +# call the pipeline +torch.manual_seed(0) +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.85, + guidance_scale=3, + source_guidance_scale=1, +).images[0] + +image.save("black_to_blue.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance 
negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPiplineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/00efdfed25ed505d82383e1aa6f01ddb.txt b/scrapped_outputs/00efdfed25ed505d82383e1aa6f01ddb.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/010878c4f61adff57a313b69bfbf36ee.txt b/scrapped_outputs/010878c4f61adff57a313b69bfbf36ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/010878c4f61adff57a313b69bfbf36ee.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. 
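As a quick, illustrative sketch (the checkpoint, prompt, and step count below are examples, not part of this reference), the scheduler is typically swapped into an existing pipeline by reusing the pipeline's scheduler configuration with from_config: Copied
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# replace the pipeline's default scheduler, keeping its configuration
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# ancestral Euler sampling often produces good results in 20-30 steps
image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut.png")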
EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. 
return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/010b61c1b09524892e674b81e6a567e2.txt b/scrapped_outputs/010b61c1b09524892e674b81e6a567e2.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/010b61c1b09524892e674b81e6a567e2.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. 
Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/013e30f4683bc1e82d2b6b2027109bad.txt b/scrapped_outputs/013e30f4683bc1e82d2b6b2027109bad.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2a137491d5fbd628392ad86cd53c8aeb530c249 --- /dev/null +++ b/scrapped_outputs/013e30f4683bc1e82d2b6b2027109bad.txt @@ -0,0 +1,11 @@ +Installing xFormers + +We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. +Starting from version 0.0.16 of xFormers, released on January 2023, installation can be easily performed using pre-built pip wheels: + + + Copied +pip install xformers +The xFormers PIP package requires the latest version of PyTorch (1.13.1 as of xFormers 0.0.16). If you need to use a previous version of PyTorch, then we recommend you install xFormers from source using the project instructions. +After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption, as discussed here. +According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or Dreambooth) in some GPUs. If you observe that problem, please install a development version as indicated in that comment. 
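As a minimal sketch of the workflow described above (the checkpoint and prompt are only examples), memory-efficient attention can be toggled on a loaded pipeline once xFormers is installed: Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# route attention through xFormers for faster inference and lower memory use
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# the optimization can be turned off again with:
# pipe.disable_xformers_memory_efficient_attention()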
diff --git a/scrapped_outputs/014fb36531fe935112c5eaa247063735.txt b/scrapped_outputs/014fb36531fe935112c5eaa247063735.txt new file mode 100644 index 0000000000000000000000000000000000000000..43caa4e9d69d10bca1cf7b16f04035243b30ac36 --- /dev/null +++ b/scrapped_outputs/014fb36531fe935112c5eaa247063735.txt @@ -0,0 +1,163 @@ +RePaint scheduler + + +Overview + +DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. +Intended for use with RePaintPipeline. +Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models +and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint + +RePaintScheduler + + +class diffusers.RePaintScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +eta: float = 0.0 +trained_betas: typing.Optional[numpy.ndarray] = None +clip_sample: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +eta (float) — +The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 -0.0 is DDIM and +1.0 is DDPM scheduler respectively. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + + +RePaint is a schedule for DDPM inpainting inside a given mask. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +original_image: FloatTensor +mask: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned +diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +original_image (torch.FloatTensor) — +the original image to inpaint on. 
+ + +mask (torch.FloatTensor) — +the mask where 0.0 values define which part of the original image to inpaint (change). + + +generator (torch.Generator, optional) — random number generator. + + +return_dict (bool) — option for returning tuple rather than +DDPMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/01a8586bc0784a4627557a3815ff5b5d.txt b/scrapped_outputs/01a8586bc0784a4627557a3815ff5b5d.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c2ceeecc625c651b3ef1cc43f5c7fb053d83bae --- /dev/null +++ b/scrapped_outputs/01a8586bc0784a4627557a3815ff5b5d.txt @@ -0,0 +1,100 @@ +Stochastic Karras VE + + +Overview + +Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. +The abstract of the paper is the following: +We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55. +This pipeline implements the Stochastic sampling tailored to the Variance-Expanding (VE) models. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_stochastic_karras_ve.py +Unconditional Image Generation +- + +KarrasVePipeline + + +class diffusers.KarrasVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: KarrasVeScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (KarrasVeScheduler) — +Scheduler for the diffusion process to be used in combination with unet to denoise the encoded image. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/01be2bbed29849c60e5daa8454e05de7.txt b/scrapped_outputs/01be2bbed29849c60e5daa8454e05de7.txt new file mode 100644 index 0000000000000000000000000000000000000000..4122aa376447d7e50c2e46c3ee80f13ca14d4399 --- /dev/null +++ b/scrapped_outputs/01be2bbed29849c60e5daa8454e05de7.txt @@ -0,0 +1,286 @@ +Custom Pipelines + +For more information about community pipelines, please have a look at this issue. +Community examples consist of both inference and training examples that have been added by the community. +Please have a look at the following table to get an overview of all community examples. Click on the Code Example to get a copy-and-paste ready code example that you can try out. +If a community doesn’t work as expected, please open an issue and ping the author on it. +Example +Description +Code Example +Colab +Author +CLIP Guided Stable Diffusion +Doing CLIP guidance for text to image generation with Stable Diffusion +CLIP Guided Stable Diffusion + +Suraj Patil +One Step U-Net (Dummy) +Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) +One Step U-Net +- +Patrick von Platen +Stable Diffusion Interpolation +Interpolate the latent space of Stable Diffusion between different prompts/seeds +Stable Diffusion Interpolation +- +Nate Raw +Stable Diffusion Mega +One Stable Diffusion Pipeline with all functionalities of Text2Image, Image2Image and Inpainting +Stable Diffusion Mega +- +Patrick von Platen +Long Prompt Weighting Stable Diffusion +One Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. +Long Prompt Weighting Stable Diffusion +- +SkyTNT +Speech to Image +Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images +Speech to Image +- +Mikail Duzenli +To load a custom pipeline you just need to pass the custom_pipeline argument to DiffusionPipeline, as one of the files in diffusers/examples/community. Feel free to send a PR with your own pipelines, we will merge them quickly. + + + Copied +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder" +) + +Example usages + + +CLIP Guided Stable Diffusion + +CLIP guided stable diffusion can help to generate more realistic images +by guiding stable diffusion at every denoising step with an additional CLIP model. +The following code requires roughly 12GB of GPU RAM. 
+ + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPFeatureExtractor, CLIPModel +import torch + + +feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K") +clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16) + + +guided_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + torch_dtype=torch.float16, +) +guided_pipeline.enable_attention_slicing() +guided_pipeline = guided_pipeline.to("cuda") + +prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" + +generator = torch.Generator(device="cuda").manual_seed(0) +images = [] +for i in range(4): + image = guided_pipeline( + prompt, + num_inference_steps=50, + guidance_scale=7.5, + clip_guidance_scale=100, + num_cutouts=4, + use_cutouts=False, + generator=generator, + ).images[0] + images.append(image) + +# save images locally +for i, img in enumerate(images): + img.save(f"./clip_guided_sd/image_{i}.png") +The images list contains a list of PIL images that can be saved locally or displayed directly in a google colab. +Generated images tend to be of higher qualtiy than natively using stable diffusion. E.g. the above script generates the following images: +. + +One Step Unet + +The dummy “one-step-unet” can be run as follows: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Note: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841). + +Stable Diffusion Interpolation + +The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes. + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + safety_checker=None, # Very important for videos...lots of false positives while interpolating + custom_pipeline="interpolate_stable_diffusion", +).to("cuda") +pipe.enable_attention_slicing() + +frame_filepaths = pipe.walk( + prompts=["a dog", "a cat", "a horse"], + seeds=[42, 1337, 1234], + num_interpolation_steps=16, + output_dir="./dreams", + batch_size=4, + height=512, + width=512, + guidance_scale=8.5, + num_inference_steps=50, +) +The output of the walk(...) function returns a list of images saved under the folder as defined in output_dir. You can use these images to create videos of stable diffusion. +Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality. + +Stable Diffusion Mega + +The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class. 
+ + + Copied +#!/usr/bin/env python3 +from diffusers import DiffusionPipeline +import PIL +import requests +from io import BytesIO +import torch + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="stable_diffusion_mega", + torch_dtype=torch.float16, +) +pipe.to("cuda") +pipe.enable_attention_slicing() + + +### Text-to-Image + +images = pipe.text2img("An astronaut riding a horse").images + +### Image-to-Image + +init_image = download_image( + "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +### Inpainting + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +prompt = "a cat sitting on a bench" +images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images +As shown above this one pipeline can run all both “text-to-image”, “image-to-image”, and “inpainting” in one pipeline. + +Long Prompt Weighting Stable Diffusion + +The Pipeline lets you input prompt without 77 token length limit. And you can increase words weighting by using ”()” or decrease words weighting by using ”[]” +The Pipeline also lets you use the main use cases of the stable diffusion pipeline in a single class. 
+ +pytorch + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16 +) +pipe = pipe.to("cuda") + +prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms" +neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] + +onnxruntime + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="lpw_stable_diffusion_onnx", + revision="onnx", + provider="CUDAExecutionProvider", +) + +prompt = "a photo of an astronaut riding a horse on mars, best quality" +neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] +if you see Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors. Do not worry, it is normal. + +Speech to Image + +The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion. 
+ + + Copied +import torch + +import matplotlib.pyplot as plt +from datasets import load_dataset +from diffusers import DiffusionPipeline +from transformers import ( + WhisperForConditionalGeneration, + WhisperProcessor, +) + + +device = "cuda" if torch.cuda.is_available() else "cpu" + +ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") + +audio_sample = ds[3] + +text = audio_sample["text"].lower() +speech_data = audio_sample["audio"]["array"] + +model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device) +processor = WhisperProcessor.from_pretrained("openai/whisper-small") + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="speech_to_image_diffusion", + speech_model=model, + speech_processor=processor, + + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +output = diffuser_pipeline(speech_data) +plt.imshow(output.images[0]) +This example produces the following image: diff --git a/scrapped_outputs/01d80081236d3aed18b8ca7aabd28034.txt b/scrapped_outputs/01d80081236d3aed18b8ca7aabd28034.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/01d80081236d3aed18b8ca7aabd28034.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. 
down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/01df407ddd0ca5935cbb0f71822a1c38.txt b/scrapped_outputs/01df407ddd0ca5935cbb0f71822a1c38.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/01df407ddd0ca5935cbb0f71822a1c38.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. 
We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. 
image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/0247f496918051ff626a635f40c86068.txt b/scrapped_outputs/0247f496918051ff626a635f40c86068.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/0247f496918051ff626a635f40c86068.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. 
The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. 
If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. 
checkpoint type    weight name                            argument for loading weights
original           diffusion_pytorch_model.bin            (none)
floating point     diffusion_pytorch_model.fp16.bin       variant, torch_dtype
non-EMA            diffusion_pytorch_model.non_ema.bin    variant
There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. 
For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. +For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. 
"scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + 
"transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/024b6d495f66ffbe96d4b6dc2553b492.txt b/scrapped_outputs/024b6d495f66ffbe96d4b6dc2553b492.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/024b6d495f66ffbe96d4b6dc2553b492.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of training the model is very sensitive to the guidance_scale values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill it separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5.
Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16).to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. 
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/029a71d92796bdac8ab84604964508c7.txt b/scrapped_outputs/029a71d92796bdac8ab84604964508c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..8806ad9c7bc5846e28773660f7a1afd17769a279 --- /dev/null +++ b/scrapped_outputs/029a71d92796bdac8ab84604964508c7.txt @@ -0,0 +1,53 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. 
The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1024) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 64) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. 
A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_channels, num_frames, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
+down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. 
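As a usage sketch to complement the reference above (not part of the original API page): the checkpoint name below is an assumption (a text-to-video checkpoint such as damo-vilab/text-to-video-ms-1.7b is expected to store a UNet3DConditionModel in its unet subfolder), and the tensor shapes follow the forward() signature documented above:

import torch
from diffusers import UNet3DConditionModel

# Assumed checkpoint: a text-to-video repository whose `unet` subfolder holds a UNet3DConditionModel
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Noisy video latents: (batch, num_channels, num_frames, height, width)
sample = torch.randn(1, unet.config.in_channels, 16, 32, 32, dtype=torch.float16, device="cuda")
# Text conditioning: (batch, sequence_length, cross_attention_dim)
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim, dtype=torch.float16, device="cuda")

with torch.no_grad():
    out = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states)
print(out.sample.shape)  # same shape as `sample`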
diff --git a/scrapped_outputs/02a8a2246909676ce154902d0be79029.txt b/scrapped_outputs/02a8a2246909676ce154902d0be79029.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/02aee9759affa29fb25ab0383cbb3c8d.txt b/scrapped_outputs/02aee9759affa29fb25ab0383cbb3c8d.txt new file mode 100644 index 0000000000000000000000000000000000000000..507273833a701bdd8365633f4cf442fc0a095949 --- /dev/null +++ b/scrapped_outputs/02aee9759affa29fb25ab0383cbb3c8d.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
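Before the reference entries that follow, here is a minimal usage sketch (not part of the original page). It loads the UNet from the runwayml/stable-diffusion-v1-5 checkpoint, as done elsewhere in this document, and runs a single denoising step; the 64×64 latent size and 77-token text-embedding length are assumptions that match that model:

import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", use_safetensors=True
)

# Noisy latents: (batch, channels, height, width)
sample = torch.randn(1, unet.config.in_channels, 64, 64)
# Text conditioning from the CLIP text encoder: (batch, sequence_length, cross_attention_dim)
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)

with torch.no_grad():
    noise_pred = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample
print(noise_pred.shape)  # torch.Size([1, 4, 64, 64])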
UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. 
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. +time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. 
+timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. 
If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. 
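As a short illustration of set_attention_slice() (a sketch, reusing the runwayml/stable-diffusion-v1-5 UNet loaded as elsewhere in this document; sliced attention can also be enabled at the pipeline level with enable_attention_slicing()):

from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# "auto" halves the input to the attention heads so attention runs in two steps;
# pass an int to use attention_head_dim // slice_size slices instead
unet.set_attention_slice("auto")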
set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. UNet2DConditionOutput class diffusers.models.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. 
freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/02bd848b35977a9c9f00ad003cb069ef.txt b/scrapped_outputs/02bd848b35977a9c9f00ad003cb069ef.txt new file mode 100644 index 0000000000000000000000000000000000000000..987714f2dceff74a56e14f7f4e6b5e17d1f50da7 --- /dev/null +++ b/scrapped_outputs/02bd848b35977a9c9f00ad003cb069ef.txt @@ -0,0 +1,48 @@ +How to use Stable Diffusion on Apple Silicon (M1/M2) + +🤗 Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch mps device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion. + +Requirements + +Mac computer with Apple silicon (M1/M2) hardware. +macOS 12.6 or later (13.0 or later recommended). +arm64 version of Python. +PyTorch 1.13. You can install it with pip or conda using the instructions in https://pytorch.org/get-started/locally/. + +Inference Pipeline + +The snippet below demonstrates how to use the mps backend and the familiar to() interface to move the Stable Diffusion pipeline to your M1 or M2 device. +We recommend “priming” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue we have detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it’s ok to use just one inference step and discard the result. + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" + +# First-time "warmup" pass (see explanation above) +_ = pipe(prompt, num_inference_steps=1) + +# Results match those from the CPU device after the warmup pass.
+image = pipe(prompt).images[0] + +Performance Recommendations + +M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does. +We recommend you use attention slicing to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% on computers without universal memory, but we have observed better performance in most Apple Silicon computers, unless you have 64 GB or more. + + + Copied +pipeline.enable_attention_slicing() + +Known Issues + +As mentioned above, we are investigating a strange first-time inference issue. +Generating multiple prompts in a batch crashes or doesn’t work reliably. We believe this is related to the mps backend in PyTorch. This is being resolved, but for now we recommend iterating instead of batching. diff --git a/scrapped_outputs/031de0c7e6fbc268b733b53d76fd629b.txt b/scrapped_outputs/031de0c7e6fbc268b733b53d76fd629b.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/031de0c7e6fbc268b733b53d76fd629b.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use with Stable Diffusion v2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block.
The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, defaults to False) — +If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding.
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/0337e3a463f82d01341bcedbe24ef622.txt b/scrapped_outputs/0337e3a463f82d01341bcedbe24ef622.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/0337e3a463f82d01341bcedbe24ef622.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. 
To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. 
For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. 
For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
+For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . 
+├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/0355b252e25654dc434b0da048d15629.txt b/scrapped_outputs/0355b252e25654dc434b0da048d15629.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/0355b252e25654dc434b0da048d15629.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. 
This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. 
Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/035d2eb81551ae17f2f6548c483bb4ce.txt b/scrapped_outputs/035d2eb81551ae17f2f6548c483bb4ce.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/035d2eb81551ae17f2f6548c483bb4ce.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in the future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention.
CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator.
It is recommended to set this to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set this to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to False) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set this to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder.
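As a practical illustration of how these processor classes are attached to a model, here is a minimal, hypothetical sketch (not part of the original reference) that inspects and swaps the processors on a pipeline’s UNet; the checkpoint name is only an example. Copied
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Inspect which processor each attention layer currently uses (a dict keyed by layer path).
print(pipe.unet.attn_processors)

# Explicitly use the PyTorch 2.0 scaled dot-product attention processor everywhere.
pipe.unet.set_attn_processor(AttnProcessor2_0())

# Revert to the library's default attention implementation.
pipe.unet.set_default_attn_processor()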
diff --git a/scrapped_outputs/037a312aaecccf6bc6297a4be6c94e34.txt b/scrapped_outputs/037a312aaecccf6bc6297a4be6c94e34.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/037a312aaecccf6bc6297a4be6c94e34.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. 
Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. 
Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided, all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +...
], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/039174a093290e2204530344edb27be3.txt b/scrapped_outputs/039174a093290e2204530344edb27be3.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/039174a093290e2204530344edb27be3.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. 
This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. 
Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, 
guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. 
Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/03a8acbaedc64b38f5af066e6bbee2e3.txt b/scrapped_outputs/03a8acbaedc64b38f5af066e6bbee2e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d4f2a190ae5e539921a29c22ee5aca25a320dc2 --- /dev/null +++ b/scrapped_outputs/03a8acbaedc64b38f5af066e6bbee2e3.txt @@ -0,0 +1,10 @@ +Using Diffusers with other modalities + +Diffusers is in the process of expanding to modalities other than images. +Example type +Colab +Pipeline +Molecule conformation generation + +❌ +More coming soon! diff --git a/scrapped_outputs/041d6ec5bc898d377b96ad1c3e5ce22b.txt b/scrapped_outputs/041d6ec5bc898d377b96ad1c3e5ce22b.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/041d6ec5bc898d377b96ad1c3e5ce22b.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. 
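+As a quick sketch of the half-precision tip mentioned above (the checkpoint id and prompt here are illustrative placeholders rather than part of this overview), a pipeline can be loaded with float16 weights like this; the other optimizations in this section are enabled in a similarly opt-in way: Copied
+import torch
+from diffusers import DiffusionPipeline
+
+# Half-precision weights roughly halve memory use and speed up inference on most recent GPUs.
+pipeline = DiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
+).to("cuda")
+
+image = pipeline("an astronaut riding a horse on mars").images[0]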
diff --git a/scrapped_outputs/04343d970e3a9bf96cf88b007a727277.txt b/scrapped_outputs/04343d970e3a9bf96cf88b007a727277.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/04343d970e3a9bf96cf88b007a727277.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. 
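+If you want to reproduce the ToMe + xFormers numbers above, a minimal sketch (assuming both tomesd and xFormers are installed) is to patch the pipeline and then enable memory-efficient attention: Copied
+import torch
+import tomesd
+from diffusers import StableDiffusionPipeline
+
+pipeline = StableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
+).to("cuda")
+
+tomesd.apply_patch(pipeline, ratio=0.5)  # merge ~50% of redundant tokens
+pipeline.enable_xformers_memory_efficient_attention()  # pair ToMe with xFormers attention
+
+image = pipeline("a photo of an astronaut riding a horse on mars").images[0]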
You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/044358532f240b4e1a89ecfcec43efdc.txt b/scrapped_outputs/044358532f240b4e1a89ecfcec43efdc.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/044358532f240b4e1a89ecfcec43efdc.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/04532fa8bf4664942bca163e9ce7d3af.txt b/scrapped_outputs/04532fa8bf4664942bca163e9ce7d3af.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/04532fa8bf4664942bca163e9ce7d3af.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Hide Pytorch content Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. 
Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. +The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. +You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/04863d9d6a0a778c9d89bfaf5c722799.txt b/scrapped_outputs/04863d9d6a0a778c9d89bfaf5c722799.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/04863d9d6a0a778c9d89bfaf5c722799.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. 
To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. 
scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/04a5c43352cba1852d9743227a5502ec.txt b/scrapped_outputs/04a5c43352cba1852d9743227a5502ec.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2a137491d5fbd628392ad86cd53c8aeb530c249 --- /dev/null +++ b/scrapped_outputs/04a5c43352cba1852d9743227a5502ec.txt @@ -0,0 +1,11 @@ +Installing xFormers + +We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. 
+Starting from version 0.0.16 of xFormers, released on January 2023, installation can be easily performed using pre-built pip wheels: + + + Copied +pip install xformers +The xFormers PIP package requires the latest version of PyTorch (1.13.1 as of xFormers 0.0.16). If you need to use a previous version of PyTorch, then we recommend you install xFormers from source using the project instructions. +After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption, as discussed here. +According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or Dreambooth) in some GPUs. If you observe that problem, please install a development version as indicated in that comment. diff --git a/scrapped_outputs/04b6c971d3b3042cb398245d60d142af.txt b/scrapped_outputs/04b6c971d3b3042cb398245d60d142af.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff6a9e3f448f32b5e091930c4a212ed0ac90283a --- /dev/null +++ b/scrapped_outputs/04b6c971d3b3042cb398245d60d142af.txt @@ -0,0 +1,50 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. CrossFrameAttnProcessor class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor < source > ( batch_size = 2 ) Cross frame attention processor. Each frame attends the first frame. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. 
CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in future. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. 
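+Processors like these are typically swapped in through a model’s set_attn_processor method. As a rough usage sketch (the checkpoint id is only an example), forcing the PyTorch 2.0 scaled dot-product processor on a UNet looks like this: Copied
+import torch
+from diffusers import UNet2DConditionModel
+from diffusers.models.attention_processor import AttnProcessor2_0
+
+unet = UNet2DConditionModel.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
+)
+# Replace every attention processor in the UNet with the scaled dot-product implementation.
+unet.set_attn_processor(AttnProcessor2_0())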
LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. diff --git a/scrapped_outputs/0513b0801d8c780910edb8268d9b7b3b.txt b/scrapped_outputs/0513b0801d8c780910edb8268d9b7b3b.txt new file mode 100644 index 0000000000000000000000000000000000000000..8278105dee001f6035eed45d773a2780bc02b523 --- /dev/null +++ b/scrapped_outputs/0513b0801d8c780910edb8268d9b7b3b.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. 
Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0. SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/05377f15590571c32cefbc2656f68eeb.txt b/scrapped_outputs/05377f15590571c32cefbc2656f68eeb.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/05377f15590571c32cefbc2656f68eeb.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! 
Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." 
Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/05582e67bfcec7fa9b41e4219522b5e8.txt b/scrapped_outputs/05582e67bfcec7fa9b41e4219522b5e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/05582e67bfcec7fa9b41e4219522b5e8.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. 
Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! 
Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/0563c13a7c1c4c7bf534f8ba98328463.txt b/scrapped_outputs/0563c13a7c1c4c7bf534f8ba98328463.txt new file mode 100644 index 0000000000000000000000000000000000000000..92fbeed0765a53040c69079974db40c4e9eb3387 --- /dev/null +++ b/scrapped_outputs/0563c13a7c1c4c7bf534f8ba98328463.txt @@ -0,0 +1,66 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. 
steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. 
original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/056988b6242e71f9baa34a0128b3b910.txt b/scrapped_outputs/056988b6242e71f9baa34a0128b3b910.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/056988b6242e71f9baa34a0128b3b910.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. 
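As a minimal sketch of that recommendation (reusing the stabilityai/stable-diffusion-2-base checkpoint and prompt from the text-to-image example below), swap the scheduler in with from_config and lower the step count to around 20:
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load Stable Diffusion 2 and swap in the multistep DPM-Solver scheduler.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# With this scheduler, roughly 20 denoising steps is usually enough.
prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=20).images[0]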
Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = 
"bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/0571ee854112d412f8b230bbf015c40b.txt b/scrapped_outputs/0571ee854112d412f8b230bbf015c40b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0589ba813ef6923277cca7ee6b454f67.txt b/scrapped_outputs/0589ba813ef6923277cca7ee6b454f67.txt new file mode 100644 index 0000000000000000000000000000000000000000..c93f948dc410dd64585375368bc3e52d8d0c43f6 --- /dev/null +++ b/scrapped_outputs/0589ba813ef6923277cca7ee6b454f67.txt @@ -0,0 +1,138 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... 
"https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into an AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. 
kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlnetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. 
kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +model = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/05b0f824d9e6de69327504f27e90b9e6.txt b/scrapped_outputs/05b0f824d9e6de69327504f27e90b9e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/05cb598c3dda9e4d07cb0d08b8e89e80.txt b/scrapped_outputs/05cb598c3dda9e4d07cb0d08b8e89e80.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/05fc9a1b7b04cc46e3de44a240e518af.txt b/scrapped_outputs/05fc9a1b7b04cc46e3de44a240e518af.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/05fc9a1b7b04cc46e3de44a240e518af.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
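A minimal usage sketch, reusing the fusing/ddim-lsun-bedroom checkpoint from the example further below; by default the pipeline returns an ImagePipelineOutput whose images field holds PIL images:
from diffusers import DDIMPipeline

# Load an unconditional DDIM pipeline and sample a single image.
pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

# eta=0.0 gives deterministic DDIM sampling; .images holds PIL images by default.
image = pipe(eta=0.0, num_inference_steps=50).images[0]
image.save("ddim_sample.png")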
__call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe(eta=0.0, num_inference_steps=50) + +>>> # process image to PIL +>>> image_processed = image.cpu().permute(0, 2, 3, 1) +>>> image_processed = (image_processed + 1.0) * 127.5 +>>> image_processed = image_processed.numpy().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/060ba29d724ef0efe0746d1279958f67.txt b/scrapped_outputs/060ba29d724ef0efe0746d1279958f67.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/060ba29d724ef0efe0746d1279958f67.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. 
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/063573cc3a27cd4c4dfb832c6896f1d6.txt b/scrapped_outputs/063573cc3a27cd4c4dfb832c6896f1d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac1b7ed0d2eb9d761ee55f0863101b101fd84b33 --- /dev/null +++ b/scrapped_outputs/063573cc3a27cd4c4dfb832c6896f1d6.txt @@ -0,0 +1,50 @@ +Re-using seeds for fast prompt engineering + +A common use case when generating images is to generate a batch of images, select one image and improve it with a better, more detailed prompt in a second run. +To do this, one needs to make each generated image of the batch deterministic. +Images are generated by denoising gaussian random noise which can be instantiated by passing a torch generator. +Now, for batched generation, we need to make sure that every single generated image in the batch is tied exactly to one seed. In 🧨 Diffusers, this can be achieved by not passing one generator, but a list +of generators to the pipeline. +Let’s go through an example using runwayml/stable-diffusion-v1-5. 
+We want to generate several versions of the prompt: + + + Copied +prompt = "Labrador in the style of Vermeer" +Let’s load the pipeline: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline + +>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +Now, let’s define 4 different generators, since we would like to reproduce a certain image. We’ll use seeds 0 to 3 to create our generators. + + + Copied +>>> generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] +Let’s generate 4 images: + + + Copied +>>> images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +>>> images + +Ok, the last image has some double eyes, but the first image looks good! +Let’s try to make the prompt a bit better while keeping the first seed +so that the images are similar to the first image. + + + Copied +prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] +We create 4 generators with seed 0, which is the first seed we used before. +Let’s run the pipeline again. + + + Copied +>>> images = pipe(prompt, generator=generator).images +>>> images diff --git a/scrapped_outputs/06530ae9cf4dc0269ff56c18a8aff76f.txt b/scrapped_outputs/06530ae9cf4dc0269ff56c18a8aff76f.txt new file mode 100644 index 0000000000000000000000000000000000000000..a8f413795df9ab5a09a9d2bf61a84f345bf9ed33 --- /dev/null +++ b/scrapped_outputs/06530ae9cf4dc0269ff56c18a8aff76f.txt @@ -0,0 +1,33 @@ +Variance preserving stochastic differential equation (VP-SDE) scheduler + + +Overview + + +Original paper can be found here. +Score SDE-VP is under construction. + +ScoreSdeVpScheduler + + +class diffusers.schedulers.ScoreSdeVpScheduler + +< +source +> +( +num_train_timesteps = 2000 +beta_min = 0.1 +beta_max = 20 +sampling_eps = 0.001 + +) + + + +The variance preserving stochastic differential equation (SDE) scheduler. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +UNDER CONSTRUCTION diff --git a/scrapped_outputs/06bb21c4da64d4d502f5d89cb0bcd79f.txt b/scrapped_outputs/06bb21c4da64d4d502f5d89cb0bcd79f.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ae8ff9c01a41be4bc950412702f6aa66a636b0b --- /dev/null +++ b/scrapped_outputs/06bb21c4da64d4d502f5d89cb0bcd79f.txt @@ -0,0 +1,108 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are known to be slower than their counterparts, GANs, because of the iterative and sequential reverse diffusion process. Recent works try to address this limitation with: progressive timestep distillation (such as LCM LoRA) model compression (such as SSD-1B) reusing adjacent features of the denoiser (such as DeepCache) In this tutorial, we focus on leveraging the power of PyTorch 2 to reduce the inference latency of text-to-image diffusion pipelines instead. We will use Stable Diffusion XL (SDXL) as a case study, but the techniques we will discuss should extend to other text-to-image diffusion pipelines.
Setup Make sure you’re on the latest version of diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft To benefit from the fastest kernels, use PyTorch nightly. You can find the installation instructions here. To report the numbers shown below, we used an 80GB 400W A100 with its clock rate set to the maximum. This tutorial doesn’t present the benchmarking code and focuses on how to perform the optimizations, instead. For the full benchmarking code, refer to: https://github.com/huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable the use of a reduced precision and scaled_dot_product_attention: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without efficiency. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This takes 7.36 seconds: Running inference in bfloat16 Enable the first optimization: use a reduced precision to run the inference. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without efficiency. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds: Why bfloat16? Using a reduced numerical precision (such as float16, bfloat16) to run inference doesn’t affect the generation quality but significantly improves latency. The benefits of using the bfloat16 numerical precision as compared to float16 are hardware-dependent. Modern generations of GPUs tend to favor bfloat16. Furthermore, in our experiments, we bfloat16 to be much more resilient when used with quantization in comparison to float16. We have a dedicated guide for running inference in a reduced precision. Running attention efficiently Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention, we can run them efficiently. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] scaled_dot_product_attention improves the latency from 4.63 seconds to 3.31 seconds. Use faster kernels with torch.compile Compile the UNet and the VAE to benefit from the faster kernels. First, configure a few compiler flags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True For the full list of compiler flags, refer to this file. 
It is also important to change the memory layout of the UNet and the VAE to “channels_last” when compiling them. This ensures maximum speed: Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Then, compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` will be slow, subsequent ones will be faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. As we’re aiming for maximum inference speed, we opt for the inductor backend with the “max-autotune” mode. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. Specifying fullgraph to be True ensures that there are no graph breaks in the underlying model, ensuring the fullest potential of torch.compile. Using SDPA attention and compiling both the UNet and VAE reduces the latency from 3.31 seconds to 2.54 seconds. Combine the projection matrices of attention Both the UNet and the VAE used in SDXL make use of Transformer-like blocks. A Transformer block consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. In the naive implementation, these projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one shot. This increases the size of the matmuls of the input projections and improves the impact of quantization (to be discussed next). Enabling this kind of computation in Diffusers just takes a single line of code: Copied pipe.fuse_qkv_projections() It provides a minor boost from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. As such, it’s not available for many non-SD pipelines such as Kandinsky. You can refer to this PR to get an idea about how to support this kind of computation. Dynamic quantization Apply dynamic int8 quantization to both the UNet and the VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization), but if the matmuls are too small, these techniques may degrade performance. Through experimentation, we found that certain linear layers in the UNet and the VAE don’t benefit from dynamic int8 quantization. You can check out the full code for filtering those layers here (referred to as dynamic_quant_filter_fn below). You will leverage the ultra-lightweight pure PyTorch library torchao to use its user-friendly APIs for quantization. First, configure all the compiler flags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end.
+torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Define the filtering functions: Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Then apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since this quantization support is limited to linear layers only, we also turn suitable pointwise convolution layers into linear layers to maximize the benefit. Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/06cae22db9ee5c5848ce2321726cf5fa.txt b/scrapped_outputs/06cae22db9ee5c5848ce2321726cf5fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/06f452358de5e80a33d7c29720e03a87.txt b/scrapped_outputs/06f452358de5e80a33d7c29720e03a87.txt new file mode 100644 index 0000000000000000000000000000000000000000..99f9d4b5885bd206a7181438e9f28f16726d42b9 --- /dev/null +++ b/scrapped_outputs/06f452358de5e80a33d7c29720e03a87.txt @@ -0,0 +1,171 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). 
To save even more memory, add the --set_grads_to_none argument to the training command to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: an existing token whose embedding is used to initialize the modifier_token embeddings Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images.
These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. 
Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. 
To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with + + + + Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub + + +Custom Diffusion can also learn multiple concepts if you provide a JSON file with some details about each concept it should learn. Run clip-retrieval to collect some real images to use for regularization: Copied pip install clip-retrieval +python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200 Then you can launch the script: Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --output_dir=$OUTPUT_DIR \ + --concepts_list=./concept_list.json \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --num_class_images=200 \ + --scale_lr \ + --hflip \ + --modifier_token "+" \ + --push_to_hub + + +Once training is finished, you can use your new Custom Diffusion model for inference. + + + + Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipeline( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") + + + + Copied import torch +from huggingface_hub.repocard import RepoCard +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/custom-diffusion-cat-wooden-pot", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion(model_id, weight_name=".bin") +pipeline.load_textual_inversion(model_id, weight_name=".bin") + +image = pipeline( + "the cat sculpture in the style of a wooden pot", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("multi-subject.png") + + + Next steps Congratulations on training a model with Custom Diffusion! 
🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/070c4475160b0b7da54b844cd66f024c.txt b/scrapped_outputs/070c4475160b0b7da54b844cd66f024c.txt new file mode 100644 index 0000000000000000000000000000000000000000..816a6ec9c2fb9e36207317fc29707b1dd833518a --- /dev/null +++ b/scrapped_outputs/070c4475160b0b7da54b844cd66f024c.txt @@ -0,0 +1,412 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. 
Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + beta_schedule="linear", + clip_sample=False, + timestep_spacing="linspace", + steps_offset=1 +) + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_vae_tiling() + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# run inference +output = pipe( + prompt="a panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=20, + generator=torch.Generator("cpu").manual_seed(666), +) + +# disable FreeInit +pipe.disable_free_init() + +frames = output.frames[0] +export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline.
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Generator = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. 
order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. 
strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
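encode_prompt can also be called directly, for example to pre-compute text embeddings once and reuse them across several video-to-video runs. The following is a minimal sketch rather than an official example: it assumes encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple, as in the Stable Diffusion pipelines, and reuses the checkpoints and input clip from the video-to-video example above. Copied
import imageio
import requests
import torch
from io import BytesIO
from PIL import Image
from diffusers import AnimateDiffVideoToVideoPipeline, MotionAdapter

# Same adapter and base model as in the usage example above.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
# For best results, configure the DDIMScheduler as shown in the earlier example.

# Load the input clip as a list of PIL frames (same clip as in the usage example).
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif"
video = [Image.fromarray(frame) for frame in imageio.get_reader(BytesIO(requests.get(url).content))]

# Pre-compute the text embeddings once; assumed to return (prompt_embeds, negative_prompt_embeds).
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="panda playing a guitar, on a boat, in the ocean, high quality",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="bad quality, worse quality",
)

# Reuse the embeddings instead of passing raw prompts.
output = pipe(
    video=video,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    strength=0.5,
    num_inference_steps=25,
)
frames = output.frames[0]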
AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (List[List[PIL.Image.Image]] or torch.Tensor or np.ndarray) — +List of PIL Images of length batch_size or torch.Tensor or np.ndarray of shape +(batch_size, num_frames, height, width, num_channels). Output class for AnimateDiff pipelines. diff --git a/scrapped_outputs/071a5dd43d206b27fab101843e431a74.txt b/scrapped_outputs/071a5dd43d206b27fab101843e431a74.txt new file mode 100644 index 0000000000000000000000000000000000000000..c4de3beeb824ab523387da3094245983936dca96 --- /dev/null +++ b/scrapped_outputs/071a5dd43d206b27fab101843e431a74.txt @@ -0,0 +1,308 @@ +Depth-to-Image Generation + + +StableDiffusionDepth2ImgPipeline + +The depth-guided stable diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. It uses MiDas to infer depth based on an image. +StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the images’ structure. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-2-depth: stabilityai/stable-diffusion-2-depth + +class diffusers.StableDiffusionDepth2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +depth_estimator: DPTForDepthEstimation +feature_extractor: DPTFeatureExtractor + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
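Because __call__ also accepts a depth_map, you can supply your own depth estimate instead of letting the pipeline run its internal MiDaS model. The snippet below is an illustrative sketch rather than an official example; it reuses the pipeline's bundled feature_extractor and depth_estimator and assumes a depth map of shape (batch, height, width). Copied
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

# Compute a depth map with the pipeline's bundled MiDaS (DPT) depth estimator.
pixel_values = pipe.feature_extractor(images=init_image, return_tensors="pt").pixel_values
with torch.no_grad():
    depth_map = pipe.depth_estimator(pixel_values.to("cuda", torch.float16)).predicted_depth

# The depth map could be edited or post-processed here before generation.
image = pipe(
    prompt="two tigers",
    image=init_image,
    depth_map=depth_map,
    strength=0.7,
).images[0]
image.save("tigers.png")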
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +depth_map: typing.Optional[torch.FloatTensor] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/07532eb0beb9f8eed7615d89c4e7947e.txt b/scrapped_outputs/07532eb0beb9f8eed7615d89c4e7947e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f855e68b8bc1a27f4f6c0425b7b6ee65371c12be --- /dev/null +++ b/scrapped_outputs/07532eb0beb9f8eed7615d89c4e7947e.txt @@ -0,0 +1,68 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. 
Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
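Before the full parameter reference below, here is a minimal sketch that puts the tips above into practice. The checkpoint name and the rate=16000 value are taken from the example further down; the prompt follows the guidance in the tips, while the step count, audio length, and number of waveforms are illustrative choices rather than recommended settings.
>>> import torch
>>> import scipy
>>> from diffusers import MusicLDMPipeline

>>> # load the pipeline in half precision and move it to the GPU
>>> pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # descriptive prompt plus the suggested negative prompt
>>> prompt = "melodic techno with a fast beat and synths, high quality, clear"
>>> negative_prompt = "low quality, average quality"

>>> # generate several candidate waveforms; they are returned ranked from best to worst by CLAP score
>>> output = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     num_inference_steps=100,
...     audio_length_in_s=10.0,
...     num_waveforms_per_prompt=4,
... )
>>> best_audio = output.audios[0]

>>> # save the top-ranked sample as a .wav file
>>> scipy.io.wavfile.write("techno_candidate.wav", rate=16000, data=best_audio)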
__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. diff --git a/scrapped_outputs/076b65737992951742f4edceccb5acce.txt b/scrapped_outputs/076b65737992951742f4edceccb5acce.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/077010a91d093247770e57513b34f72b.txt b/scrapped_outputs/077010a91d093247770e57513b34f72b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/077010a91d093247770e57513b34f72b.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. 
You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed to +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, a higher view batch size can +speed up the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/07b18ca9746ae35e7d731a618643ad03.txt b/scrapped_outputs/07b18ca9746ae35e7d731a618643ad03.txt new file mode 100644 index 0000000000000000000000000000000000000000..642f75cfa7384eee4d148149356f6a94df142d05 --- /dev/null +++ b/scrapped_outputs/07b18ca9746ae35e7d731a618643ad03.txt @@ -0,0 +1,390 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. 
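In this pipeline, the strength argument controls how much noise SDEdit adds to the input image before denoising it, which is exactly the faithfulness-versus-realism trade-off described above. A minimal sketch follows, reusing the sketch image, model id, and prompt from the example further down; the two strength values are illustrative.
>>> import torch
>>> import requests
>>> from io import BytesIO
>>> from PIL import Image
>>> from diffusers import StableDiffusionImg2ImgPipeline

>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((768, 512))

>>> prompt = "A fantasy landscape, trending on artstation"
>>> # a low strength stays close to the input sketch, a high strength follows the prompt more freely
>>> for strength in (0.3, 0.75):
...     image = pipe(prompt=prompt, image=init_image, strength=strength).images[0]
...     image.save(f"fantasy_landscape_strength_{strength}.png")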
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list +of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again.
strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. 
callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. 
Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... 
) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt") + +>>> # Enable float16 and move to GPU +>>> import torch +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features.
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
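As a minimal sketch of how the experimental fused-projection toggles above are meant to be used; the checkpoint name is illustrative, and everything else uses only the methods documented here.
>>> import torch
>>> from diffusers import StableDiffusionImg2ImgPipeline

>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> # fuse the query/key/value projection matrices in the UNet and VAE attention modules
>>> pipe.fuse_qkv_projections(unet=True, vae=True)
>>> # ... run inference as usual here ...
>>> # revert to the separate, unfused projections
>>> pipe.unfuse_qkv_projections(unet=True, vae=True)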
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/07bca4bb7d008ed191633525d6216b15.txt b/scrapped_outputs/07bca4bb7d008ed191633525d6216b15.txt new file mode 100644 index 0000000000000000000000000000000000000000..2965443dff72975f68a0fce80ea9e69b7c503bcd --- /dev/null +++ b/scrapped_outputs/07bca4bb7d008ed191633525d6216b15.txt @@ -0,0 +1,392 @@ +Text-to-Image Generation + + +StableDiffusionPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionPipeline is capable of generating photo-realistic images given any text input using Stable Diffusion. +The original codebase can be found here: +Stable Diffusion V1: CompVis/stable-diffusion +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-v1-4 (512x512 resolution) CompVis/stable-diffusion-v1-4 +stable-diffusion-v1-5 (512x512 resolution) runwayml/stable-diffusion-v1-5 +stable-diffusion-2-base (512x512 resolution): stabilityai/stable-diffusion-2-base +stable-diffusion-2 (768x768 resolution): stabilityai/stable-diffusion-2 +stable-diffusion-2-1-base (512x512 resolution) stabilityai/stable-diffusion-2-1-base +stable-diffusion-2-1 (768x768 resolution): stabilityai/stable-diffusion-2-1 + +class diffusers.StableDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
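The components above can be swapped when loading the pipeline. As a minimal sketch of replacing the default scheduler with DPMSolverMultistepScheduler (the checkpoint and scheduler choice here are illustrative, not requirements of the pipeline):
Copied
>>> import torch
>>> from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> # Rebuild the scheduler from the existing config so the timestep settings carry over
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
>>> pipe = pipe.to("cuda")

>>> image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]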
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting.
If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers.
+ + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_vae_tiling + +< +source +> +( +) + + + +Enable tiled VAE decoding. +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. + +disable_vae_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/07ddbd74085f9f817ddf370cd3f13dc9.txt b/scrapped_outputs/07ddbd74085f9f817ddf370cd3f13dc9.txt new file mode 100644 index 0000000000000000000000000000000000000000..aee1e636a419504d65502e324985c985e38c0d21 --- /dev/null +++ b/scrapped_outputs/07ddbd74085f9f817ddf370cd3f13dc9.txt @@ -0,0 +1,36 @@ +VQ Diffusion Vector Quantized Diffusion Model for Text-to-Image Synthesis is by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM).
We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. The original codebase can be found at microsoft/VQ-Diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. VQDiffusionPipeline class diffusers.VQDiffusionPipeline < source > ( vqvae: VQModel text_encoder: CLIPTextModel tokenizer: CLIPTokenizer transformer: Transformer2DModel scheduler: VQDiffusionScheduler learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings ) Parameters vqvae (VQModel) — +Vector Quantized Variational Auto-Encoder (VAE) model to encode and decode images to and from latent +representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-base-patch32). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. transformer (Transformer2DModel) — +A conditional Transformer2DModel to denoise the encoded image latents. scheduler (VQDiffusionScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using VQ Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: typing.Union[str, typing.List[str]] num_inference_steps: int = 100 guidance_scale: float = 5.0 truncation_rate: float = 1.0 num_images_per_prompt: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — +Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at +most truncation_rate. The lowest probabilities that would increase the cumulative probability above +truncation_rate are set to zero. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor of shape (batch), optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Must be valid embedding indices. If not provided, a latents tensor will be generated of +completely masked latent pixels. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. truncate < source > ( log_p_x_0: FloatTensor truncation_rate: float ) Truncates log_p_x_0 such that for each column vector, the total cumulative probability is at most truncation_rate. +The lowest probabilities that would increase the cumulative probability above truncation_rate are set to +zero. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/07ea9ac4c5879cbdfcd590384be2ba9c.txt b/scrapped_outputs/07ea9ac4c5879cbdfcd590384be2ba9c.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/07ea9ac4c5879cbdfcd590384be2ba9c.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution p_{\theta}(x_{t-1}|x_{t}). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models.
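For example, a minimal sketch of the load, save, and reload round trip (the checkpoint is the one used in the examples below; the local directory name is only a placeholder):
Copied
from diffusers import UNet2DConditionModel

# Download (or load from the local cache) a pretrained model from the Hub
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Save it to a local directory and load it back from disk
unet.save_pretrained("./my-finetuned-unet")
unet = UNet2DConditionModel.from_pretrained("./my-finetuned-unet")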
config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. 
Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). 
from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files.
The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/0814eb7784228db3330e94247474e862.txt b/scrapped_outputs/0814eb7784228db3330e94247474e862.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/0814eb7784228db3330e94247474e862.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. 
The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. 
The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. 
For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. 
Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. 
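If the second checkpoint does not ship the same VAE, one option is to reuse the first pipeline’s VAE when loading it; components passed to from_pretrained override the ones in the checkpoint. A minimal sketch (this mirrors the loading call in the next step, with the VAE overridden): Copied
# Reuse the first pipeline's VAE so the latents passed between the pipelines decode consistently
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "ogkalu/Comic-Diffusion", vae=pipeline.vae, torch_dtype=torch.float16
)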
Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. 
Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
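The example below downloads a ready-made depth map for the initial image. If you need to create a depth map for your own image instead, a monocular depth estimator can produce one; the snippet below is only a sketch, and "Intel/dpt-large" is an assumed checkpoint choice rather than the model used to create the map in the next step: Copied
from diffusers.utils import load_image
from transformers import pipeline as depth_pipeline

# estimate a depth map from the initial image; the "depth" key holds a PIL image
depth_estimator = depth_pipeline("depth-estimation", model="Intel/dpt-large")
estimated_depth = depth_estimator(
    load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
)["depth"]
The example continues with the pre-computed depth map: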
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
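The snippets above enable xFormers unconditionally. Since PyTorch 2.0 already ships scaled-dot product attention, a small guard like the sketch below (assuming xFormers is installed whenever it is actually used) keeps you from enabling both at once: Copied
import torch

# PyTorch 2.0+ dispatches to scaled-dot product attention by default, so only fall
# back to xFormers on older PyTorch versions
if not hasattr(torch.nn.functional, "scaled_dot_product_attention"):
    pipeline.enable_xformers_memory_efficient_attention()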
diff --git a/scrapped_outputs/08185a683bfc8b1afd61fee055871d54.txt b/scrapped_outputs/08185a683bfc8b1afd61fee055871d54.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/08185a683bfc8b1afd61fee055871d54.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image based on a text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model. +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will auto accept for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next we install diffusers and dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default diffusers makes use of model cpu offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
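If you want to verify the peak VRAM usage of the cascade on your own hardware while running the example below, torch exposes simple memory counters; this is just a sketch and assumes a CUDA device: Copied
import torch

torch.cuda.reset_peak_memory_stats()
# ... run the three stages from the example below here ...
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.1f} GB")
With that in place, here is the full three-stage text-to-image example: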
Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
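For example, if the three stages from the text-to-image example above are still in memory, the image-to-image pipelines can be built directly from their components instead of calling from_pretrained again; this is a sketch of that pattern: Copied
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# reuse the already-instantiated weights from stage_1 and stage_2 above
stage_1 = IFImg2ImgPipeline(**stage_1.components)
stage_2 = IFImg2ImgSuperResolutionPipeline(**stage_2.components)
The full image-to-image example below loads the pipelines from scratch instead: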
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can be used for text-guided inpainting. +In this case just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, Pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for a shorter number of timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
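For example, a minimal sketch (the prompt text is illustrative) of calling encode_prompt directly with a negative prompt, so the returned embeddings can be reused for both the base and super-resolution stages without re-encoding the text:

import torch
from diffusers import IFInpaintingPipeline

pipe = IFInpaintingPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# encode once; pass the results as prompt_embeds / negative_prompt_embeds
# to both pipeline stages
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "blue sunglasses", negative_prompt="low quality, watermark"
)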
diff --git a/scrapped_outputs/082adee65c4aefdefe7bbbed3691f287.txt b/scrapped_outputs/082adee65c4aefdefe7bbbed3691f287.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/082adee65c4aefdefe7bbbed3691f287.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/082d48e829d8c50ec28178915723303f.txt b/scrapped_outputs/082d48e829d8c50ec28178915723303f.txt new file mode 100644 index 0000000000000000000000000000000000000000..eed31c90b94b82998fbc1e32829e8db344dabe3b --- /dev/null +++ b/scrapped_outputs/082d48e829d8c50ec28178915723303f.txt @@ -0,0 +1,75 @@ +What is safetensors? + +safetensors is a different format +from the classic .bin format used by PyTorch, which relies on pickle. It contains the +exact same data, which is just the model weights (or tensors). +Pickle is notoriously unsafe: a malicious file can execute arbitrary code when loaded. +The Hub itself tries to prevent issues from it, but it’s not a silver bullet. +safetensors’ first and foremost goal is to make loading machine learning models safe, +in the sense that no takeover of your computer can happen. +Hence the name. + +Why use safetensors? + +Safety is one reason, especially if you’re using a model that is not well known and +you’re not sure about the source of the file. +A secondary reason is loading speed: safetensors can load models much faster +than regular pickle files. If you spend a lot of time switching models, this can be +a huge time saver. 
+Numbers taken on an AMD EPYC 7742 64-Core Processor + + + Copied +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1") + +# Loaded in safetensors 0:00:02.033658 +# Loaded in PyTorch 0:00:02.663379 +This is the entire loading time; the actual time to load the 500MB of weights is: + + + Copied +Safetensors: 3.4873ms +PyTorch: 172.7537ms +Performance in general is a tricky business, and there are a few things to understand: +If you’re using the model for the first time from the Hub, you will have to download the weights. +That’s very likely to be much slower than any loading method, so you will not see any difference. +If you’re loading the model for the first time (let’s say after a reboot), your machine will have to +actually read the disk. It’s likely to be equally slow in both cases, so again the speed difference may not be as visible (this depends on hardware and the actual model). +The biggest benefit comes when the model has already been loaded on your computer before and you’re switching from one model to another. Your OS tries hard not to read from disk, since this is slow, so it keeps the files around in RAM, making loading them again much faster. Since safetensors does a zero-copy of the tensors, reloading will be faster than PyTorch, which has at least one extra copy to do. + +How to use safetensors? + +If you have safetensors installed and all the weights are available in safetensors format, +then the safetensors weights will be used by default instead of the PyTorch ones. +If you are really paranoid about this, the ultimate weapon would be disabling torch.load: + + + Copied +import torch + + +def _raise(): + raise RuntimeError("I don't want to use pickle") + + +torch.load = lambda *args, **kwargs: _raise() + +I want to use model X but it doesn't have safetensors weights. + +Just go to this space. +This will create a new PR with the weights, let’s say refs/pr/22. +This space will download the pickled version, convert it, and upload it on the Hub as a PR. +If anything bad is contained in the file, it’s the Hugging Face Hub that will run into issues, not your own computer, +and we’re equipped to deal with that. +Then, in order to use the model even before the branch gets accepted by the original author, you can do: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22") +or you can test it directly online with this space. +And that’s it! +Anything unclear, any concerns, or found a bug? Open an issue diff --git a/scrapped_outputs/088db26767f16e699885c86bcc74883c.txt b/scrapped_outputs/088db26767f16e699885c86bcc74883c.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/088db26767f16e699885c86bcc74883c.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. 
We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. 
image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/08a38a02f8bb6662bffc07b611915c5f.txt b/scrapped_outputs/08a38a02f8bb6662bffc07b611915c5f.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/08a38a02f8bb6662bffc07b611915c5f.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. 
Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/08b8b6452d3e251c6d0fb8f810ca6972.txt b/scrapped_outputs/08b8b6452d3e251c6d0fb8f810ca6972.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/08b8b6452d3e251c6d0fb8f810ca6972.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. 
Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. 
As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... ) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. 
These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. 
The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/08d728e98d2df0538841a053eeea0ad8.txt b/scrapped_outputs/08d728e98d2df0538841a053eeea0ad8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d5b7d8b4b3e7332a66718ccf8ad4bab757e09676 --- /dev/null +++ b/scrapped_outputs/08d728e98d2df0538841a053eeea0ad8.txt @@ -0,0 +1,526 @@ +Audio Diffusion + + +Overview + +Audio Diffusion by Robert Dargavel Smith. +Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to +and from mel spectrogram images. +The original codebase of this implementation can be found here, including +training scripts and example notebooks. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_audio_diffusion.py +Unconditional Audio Generation + + +Examples: + + +Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=mel.get_sample_rate())) + +Latent Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Audio Diffusion with DDIM (faster) + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Variations, in-painting, out-painting etc. + + + + Copied +output = pipe( + raw_audio=output.audios[0, 0], + start_step=int(pipe.get_default_steps() / 2), + mask_start_secs=1, + mask_end_secs=1, +) +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +AudioDiffusionPipeline + + +class diffusers.AudioDiffusionPipeline + +< +source +> +( +vqvae: AutoencoderKL +unet: UNet2DConditionModel +mel: Mel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler] + +) + + +Parameters + +vqae (AutoencoderKL) — Variational AutoEncoder for Latent Audio Diffusion or None + + +unet (UNet2DConditionModel) — UNET model + + +mel (Mel) — transform audio <-> spectrogram + + +scheduler ([DDIMScheduler or DDPMScheduler]) — de-noising scheduler + + + +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +audio_file: str = None +raw_audio: ndarray = None +slice: int = 0 +start_step: int = 0 +steps: int = None +generator: Generator = None +mask_start_secs: float = 0 +mask_end_secs: float = 0 +step_generator: Generator = None +eta: float = 0 +noise: Tensor = None +encoding: Tensor = None +return_dict = True + +) +→ +List[PIL Image] + +Parameters + +batch_size (int) — number of samples to generate + + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + +slice (int) — slice number of audio to convert + + +start_step (int) — step to start from + + +steps (int) — number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) + + +generator (torch.Generator) — random number generator or None + + +mask_start_secs (float) — number of seconds of audio to mask (not generate) at start + + +mask_end_secs (float) — number of seconds of audio to mask (not generate) at end + + +step_generator (torch.Generator) — random number generator used to de-noise or None + + +eta (float) — parameter between 0 and 1 used with DDIM scheduler + + +noise (torch.Tensor) — noise tensor of shape (batch_size, 1, height, width) or None + + +encoding (torch.Tensor) — for UNet2DConditionModel shape (batch_size, seq_length, cross_attention_dim) + + +return_dict (bool) — if True return AudioPipelineOutput, ImagePipelineOutput else Tuple + + +Returns + +List[PIL Image] + + + +mel spectrograms (float, List[np.ndarray]): sample rate and raw audios + + +Generate random mel spectrogram from audio input and convert to audio. + +encode + +< +source +> +( +images: typing.List[PIL.Image.Image] +steps: int = 50 + +) +→ +np.ndarray + +Parameters + +images (List[PIL Image]) — list of images to encode + + +steps (int) — number of encoding steps to perform (defaults to 50) + + +Returns + +np.ndarray + + + +noise tensor of shape (batch_size, 1, height, width) + + +Reverse step process: recover noisy image from generated image. 
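For example, a minimal sketch (illustrative; it assumes the DDIM checkpoint, since encode requires a DDIMScheduler) that combines encode with the slerp method documented below to interpolate between two generated spectrograms:

import torch
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)

# generate two samples, then recover the noise that (approximately) produces each image
output = pipe(batch_size=2)
image_a, image_b = output.images
noise = torch.as_tensor(pipe.encode([image_a, image_b]))  # shape (2, 1, height, width)

# spherically interpolate half-way between the two noise tensors and regenerate
mixed = pipe.slerp(noise[0], noise[1], 0.5).unsqueeze(0).to(pipe.device)
interpolated = pipe(noise=mixed)
# interpolated.images[0] is the new spectrogram, interpolated.audios[0] the raw audio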
+ +get_default_steps + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of steps + + +Returns default number of steps recommended for inference + +get_input_dims + +< +source +> +( +) +→ +Tuple + +Returns + +Tuple + + + +(height, width) + + +Returns dimension of input image + +slerp + +< +source +> +( +x0: Tensor +x1: Tensor +alpha: float + +) +→ +torch.Tensor + +Parameters + +x0 (torch.Tensor) — first tensor to interpolate between + + +x1 (torch.Tensor) — seconds tensor to interpolate between + + +alpha (float) — interpolation between 0 and 1 + + +Returns + +torch.Tensor + + + +interpolated tensor + + +Spherical Linear intERPolation + +Mel + + +class diffusers.Mel + +< +source +> +( +x_res: int = 256 +y_res: int = 256 +sample_rate: int = 22050 +n_fft: int = 2048 +hop_length: int = 512 +top_db: int = 80 +n_iter: int = 32 + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + +sample_rate (int) — sample rate of audio + + +n_fft (int) — number of Fast Fourier Transforms + + +hop_length (int) — hop length (a higher number is recommended for lower than 256 y_res) + + +top_db (int) — loudest in decibels + + +n_iter (int) — number of iterations for Griffin Linn mel inversion + + + + +audio_slice_to_image + +< +source +> +( +slice: int + +) +→ +PIL Image + +Parameters + +slice (int) — slice number of audio to convert (out of get_number_of_slices()) + + +Returns + +PIL Image + + + +grayscale image of x_res x y_res + + +Convert slice of audio to spectrogram. + +get_audio_slice + +< +source +> +( +slice: int = 0 + +) +→ +np.ndarray + +Parameters + +slice (int) — slice number of audio (out of get_number_of_slices()) + + +Returns + +np.ndarray + + + +audio as numpy array + + +Get slice of audio. + +get_number_of_slices + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of spectograms audio can be sliced into + + +Get number of slices in audio. + +get_sample_rate + +< +source +> +( +) +→ +int + +Returns + +int + + + +sample rate of audio + + +Get sample rate: + +image_to_audio + +< +source +> +( +image: Image + +) +→ +audio (np.ndarray) + +Parameters + +image (PIL Image) — x_res x y_res grayscale image + + +Returns + +audio (np.ndarray) + + + +raw audio + + +Converts spectrogram to audio. + +load_audio + +< +source +> +( +audio_file: str = None +raw_audio: ndarray = None + +) + + +Parameters + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + + +Load audio. + +set_resolution + +< +source +> +( +x_res: int +y_res: int + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + + +Set resolution. diff --git a/scrapped_outputs/08ec8ed2f7d5f2d39535da2e501003ee.txt b/scrapped_outputs/08ec8ed2f7d5f2d39535da2e501003ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..e5216241a747f397d63553bb27a7052f4d40a072 --- /dev/null +++ b/scrapped_outputs/08ec8ed2f7d5f2d39535da2e501003ee.txt @@ -0,0 +1,145 @@ +Loaders + +There are many weights to train adapter neural networks for diffusion models, such as +Textual Inversion +LoRA +Hypernetworks +Such adapter neural networks often only consist of a fraction of the number of weights compared +to the pretrained model and as such are very portable. The Diffusers library offers an easy-to-use +API to load such adapter neural networks via the loaders.py module. 
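For example, a minimal sketch (the weights path and prompt are hypothetical placeholders) of loading LoRA attention-processor weights into the UNet of a pipeline with load_attn_procs():

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# accepts a Hub model id, a local directory, or a state dict
# (the directory is expected to contain a file such as pytorch_lora_weights.bin)
pipe.unet.load_attn_procs("path/to/lora_weights")  # hypothetical path

image = pipe("a photo of a sks dog").images[0]  # illustrative prompt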
+Note: This module is still highly experimental and prone to future changes. + +LoaderMixins + + +UNet2DConditionLoadersMixin + + +class diffusers.loaders.UNet2DConditionLoadersMixin + +< +source +> +( +) + + + + +load_attn_procs + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +cross_attention.py +and be a torch.nn.Module class. +This function is experimental and might change in the future +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +save_attn_procs + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +weights_name: str = 'pytorch_lora_weights.bin' +save_function: typing.Callable = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. 
+ + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save an attention procesor to a directory, so that it can be re-loaded using the +[load_attn_procs()](/docs/diffusers/v0.12.0/en/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs) method. diff --git a/scrapped_outputs/08ef2ecbc8735190a9016d227968f472.txt b/scrapped_outputs/08ef2ecbc8735190a9016d227968f472.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/08ef2ecbc8735190a9016d227968f472.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. 
embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. 
+If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/0911f01d6b46efaf31018d61fe720cee.txt b/scrapped_outputs/0911f01d6b46efaf31018d61fe720cee.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/0911f01d6b46efaf31018d61fe720cee.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe(eta=0.0, num_inference_steps=50) + +>>> # process image to PIL +>>> image_processed = image.cpu().permute(0, 2, 3, 1) +>>> image_processed = (image_processed + 1.0) * 127.5 +>>> image_processed = image_processed.numpy().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/0943e46bec427eff28345b35b0e08d80.txt b/scrapped_outputs/0943e46bec427eff28345b35b0e08d80.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/0943e46bec427eff28345b35b0e08d80.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. 
The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... 
).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/095b377c30b712fdf07e7dc9c67f2acc.txt b/scrapped_outputs/095b377c30b712fdf07e7dc9c67f2acc.txt new file mode 100644 index 0000000000000000000000000000000000000000..8c5bcb9f001a84d9b945c267456eb710daaafe80 --- /dev/null +++ b/scrapped_outputs/095b377c30b712fdf07e7dc9c67f2acc.txt @@ -0,0 +1,104 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. 
solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. 
DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. 
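As a usage sketch for this scheduler (the model id, prompt, and step count are illustrative, not part of the original documentation), it is typically swapped into an existing pipeline via from_config, following the solver_order=2 recommendation for guided sampling from the Tips above:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# Replace the default scheduler; solver_order=2 is the recommended setting for guided sampling
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# DPMSolver targets few-step sampling, e.g. around 20 inference steps
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]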
singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/095cf31020cc11ca9d5f31f25243aa55.txt b/scrapped_outputs/095cf31020cc11ca9d5f31f25243aa55.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d4f2a190ae5e539921a29c22ee5aca25a320dc2 --- /dev/null +++ b/scrapped_outputs/095cf31020cc11ca9d5f31f25243aa55.txt @@ -0,0 +1,10 @@ +Using Diffusers with other modalities + +Diffusers is in the process of expanding to modalities other than images. +Example type +Colab +Pipeline +Molecule conformation generation + +❌ +More coming soon! diff --git a/scrapped_outputs/09625e3df7b081987702652e767d8bd3.txt b/scrapped_outputs/09625e3df7b081987702652e767d8bd3.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/09625e3df7b081987702652e767d8bd3.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the InstructPix2Pix relevant parts of the script. 
The script begins by modifing the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and and edit instructions are preprocessed and tokenized. It is important the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, in the training loop, it starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image. Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. 
You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. 
diff --git a/scrapped_outputs/09821ff2db32c23e4cbfb1e72a111542.txt b/scrapped_outputs/09821ff2db32c23e4cbfb1e72a111542.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/099a71265f417564b5405e5c107edcc4.txt b/scrapped_outputs/099a71265f417564b5405e5c107edcc4.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/099a71265f417564b5405e5c107edcc4.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/099edc61a4ae37ce3c31e7b390d9b4e9.txt b/scrapped_outputs/099edc61a4ae37ce3c31e7b390d9b4e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/099edc61a4ae37ce3c31e7b390d9b4e9.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. 
Start by installing DeepCache:

pip install DeepCache

Then load and enable the DeepCacheSDHelper:

  import torch
  from diffusers import StableDiffusionPipeline
  pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda")

+ from DeepCache import DeepCacheSDHelper
+ helper = DeepCacheSDHelper(pipe=pipe)
+ helper.set_params(
+     cache_interval=3,
+     cache_branch_id=0,
+ )
+ helper.enable()

  image = pipe("a photo of an astronaut on a moon").images[0]

The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval means the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset.

Benchmark

We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B):

Resolution | Batch size | Original | DeepCache(I=3, B=0) | DeepCache(I=5, B=0) | DeepCache(I=5, B=1)
512        | 8          | 15.96    | 6.88(2.32x)         | 5.03(3.18x)         | 7.27(2.20x)
512        | 4          | 8.39     | 3.60(2.33x)         | 2.62(3.21x)         | 3.75(2.24x)
512        | 1          | 2.61     | 1.12(2.33x)         | 0.81(3.24x)         | 1.11(2.35x)
768        | 8          | 43.58    | 18.99(2.29x)        | 13.96(3.12x)        | 21.27(2.05x)
768        | 4          | 22.24    | 9.67(2.30x)         | 7.10(3.13x)         | 10.74(2.07x)
768        | 1          | 6.33     | 2.72(2.33x)         | 1.97(3.21x)         | 2.98(2.12x)
1024       | 8          | 101.95   | 45.57(2.24x)        | 33.72(3.02x)        | 53.00(1.92x)
1024       | 4          | 49.25    | 21.86(2.25x)        | 16.19(3.04x)        | 25.78(1.91x)
1024       | 1          | 13.83    | 6.07(2.28x)         | 4.43(3.12x)         | 7.15(1.93x)

diff --git a/scrapped_outputs/09f5db237c4aa3887e77e7464919fb9e.txt b/scrapped_outputs/09f5db237c4aa3887e77e7464919fb9e.txt new file mode 100644 index 0000000000000000000000000000000000000000..c096748fc9379b50eaf61a541e581e9ab2545d55 --- /dev/null +++ b/scrapped_outputs/09f5db237c4aa3887e77e7464919fb9e.txt @@ -0,0 +1,383 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either:
- A textual prompt
- A prompt combined with guidance from poses or edges
- Video Instruct-Pix2Pix (instruction-guided video editing)
Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.
Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.

You can find additional information about Text2Video-Zero on the project page, paper, and original codebase.

Usage example

Text-To-Video

To generate a video from a prompt, run the following Python code:

import torch
import imageio
from diffusers import TextToVideoZeroPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)

You can change these parameters in the pipeline call:
- Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12
- T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48
- Video length: video_length, the number of frames to be generated.
Default: video_length=8.

We can also generate longer videos by doing the processing in a chunk-by-chunk manner:

import torch
import imageio
import numpy as np
from diffusers import TextToVideoZeroPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
seed = 0
video_length = 24  # 24 ÷ 4fps = 6 seconds
chunk_size = 8
prompt = "A panda is playing guitar on times square"

# Generate the video chunk-by-chunk
result = []
chunk_ids = np.arange(0, video_length, chunk_size - 1)
generator = torch.Generator(device="cuda")
for i in range(len(chunk_ids)):
    print(f"Processing chunk {i + 1} / {len(chunk_ids)}")
    ch_start = chunk_ids[i]
    ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
    # Attach the first frame for Cross Frame Attention
    frame_ids = [0] + list(range(ch_start, ch_end))
    # Fix the seed for the temporal consistency
    generator.manual_seed(seed)
    output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids)
    result.append(output.images[1:])

# Concatenate chunks and save
result = np.concatenate(result)
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)

SDXL Support

In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline pipeline:

import torch
from diffusers import TextToVideoZeroSDXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

Text-To-Video with Pose Control

To generate a video from a prompt with additional pose control:

Download a demo video:

from huggingface_hub import hf_hub_download

filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4"
repo_id = "PAIR/Text2Video-Zero"
video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)

Read the video containing the extracted pose images:

from PIL import Image
import imageio

reader = imageio.get_reader(video_path, "ffmpeg")
frame_count = 8
pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]

To extract pose from an actual video, read the ControlNet documentation.
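If you want to compute the pose frames from your own video instead of downloading pre-extracted ones, a minimal sketch with the controlnet_aux OpenPose detector could look like the following (the detector checkpoint and the frames variable are illustrative assumptions, not part of the original guide):

from controlnet_aux import OpenposeDetector

# Assumes `frames` is a list of PIL images read from your own video, as in the step above
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")  # assumed annotator checkpoint
pose_images = [open_pose(frame) for frame in frames]

The resulting pose_images can then be passed to the pipeline exactly as in the next step.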
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation using the Canny edge ControlNet model; a condensed sketch is shown below.
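A condensed, hedged sketch of that edge-guided variant, reusing the Canny-edge demo video from the DreamBooth section below together with the base Stable Diffusion checkpoint (the prompt and frame count are illustrative):

import torch
import imageio
from PIL import Image
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# Demo video of pre-extracted Canny edges (same asset as in the DreamBooth example below)
video_path = hf_hub_download(
    repo_type="space", repo_id="PAIR/Text2Video-Zero", filename="__assets__/canny_videos_mp4/girl_turning.mp4"
)
reader = imageio.get_reader(video_path, "ffmpeg")
canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(8)]

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Cross-frame attention keeps appearance and identity consistent across frames
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# Fix latents for all frames
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "oil painting of a woman turning her head"
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)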
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
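As a concrete illustration of the scheduler note above, here is a minimal sketch of swapping the scheduler on the zero-shot pipeline; which scheduler actually gives the best speed/quality trade-off for this pipeline is not claimed here:

import torch
from diffusers import TextToVideoZeroPipeline, DPMSolverMultistepScheduler

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Build the new scheduler from the current scheduler's config so the noise-schedule settings carry over
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)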
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. 
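Tying the SDXL reference above together, here is a hedged end-to-end sketch of a TextToVideoZeroSDXLPipeline call using the documented arguments; the prompt and output path are illustrative, and the post-processing assumes the same NumPy frame format as the non-SDXL examples earlier in this page:

import torch
import imageio
from diffusers import TextToVideoZeroSDXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# video_length, motion_field_strength_x/y and t0/t1 use the defaults documented above
result = pipe(
    prompt="An astronaut riding a horse on the beach",
    video_length=8,
    motion_field_strength_x=12,
    motion_field_strength_y=12,
    t0=44,
    t1=47,
).images
result = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("sdxl_video.mp4", result, fps=4)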
diff --git a/scrapped_outputs/0a42357f68098c291a13ec7e16a82e8c.txt b/scrapped_outputs/0a42357f68098c291a13ec7e16a82e8c.txt new file mode 100644 index 0000000000000000000000000000000000000000..0e7f0031784cb18f903bdc8b268f914396bcafa5 --- /dev/null +++ b/scrapped_outputs/0a42357f68098c291a13ec7e16a82e8c.txt @@ -0,0 +1,628 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. 
If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple.
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.8) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try its best to recognize the content of the input image even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952.
negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +...
).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel) — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image inpainting using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic.
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple.
When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate opencv-python +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> from PIL import Image +>>> import numpy as np +>>> import torch +>>> import cv2 + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding.
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
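Both SDXL ControlNet pipelines documented above expose encode_prompt(), so the prompt embeddings can be computed once and reused across calls through the prompt_embeds and pooled_prompt_embeds arguments. A minimal sketch, continuing from the inpainting example above and assuming the four-tensor return order (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds) of the SDXL encode_prompt() variants; the negative prompt text here is an illustrative placeholder:
>>> # pre-compute the prompt embeddings once ...
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a handsome man with ray-ban sunglasses",
...     negative_prompt="low quality, blurry",
...     device="cuda",
...     do_classifier_free_guidance=True,
... )

>>> # ... and reuse them instead of re-encoding the prompt on every call
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
...     num_inference_steps=20,
...     generator=generator,
... ).images[0]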
diff --git a/scrapped_outputs/0a4cdcaccfbbb8ba8ae36c14aab63c79.txt b/scrapped_outputs/0a4cdcaccfbbb8ba8ae36c14aab63c79.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0aa5cbff6116cc8859eab5c847f7562f.txt b/scrapped_outputs/0aa5cbff6116cc8859eab5c847f7562f.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/0aa5cbff6116cc8859eab5c847f7562f.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. 
Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. 
target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. 
Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings.
Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved.
If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/0aad254c95fc8011754ea72b42bd8926.txt b/scrapped_outputs/0aad254c95fc8011754ea72b42bd8926.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/0aad254c95fc8011754ea72b42bd8926.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. 
out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/0b07f41230725e63f211c6abd4673112.txt b/scrapped_outputs/0b07f41230725e63f211c6abd4673112.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/0b07f41230725e63f211c6abd4673112.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. 
To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. 
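To make the weighting concrete, here is a rough, self-contained sketch of the idea; the helper name min_snr_weights and its signature are illustrative and not taken from the script itself. The per-timestep loss weight clips the signal-to-noise ratio at gamma:

import torch

def min_snr_weights(alphas_cumprod, timesteps, gamma=5.0, prediction_type="epsilon"):
    # Illustrative sketch, not the training script's exact implementation.
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t); the weight is min(SNR, gamma) / SNR for
    # epsilon-prediction, or min(SNR, gamma) / (SNR + 1) for v-prediction.
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    clipped = torch.clamp(snr, max=gamma)
    if prediction_type == "epsilon":
        return clipped / snr
    if prediction_type == "v_prediction":
        return clipped / (snr + 1.0)
    raise ValueError(f"Unsupported prediction_type: {prediction_type}")

Multiplying each sample's MSE loss by these weights before averaging down-weights the easy, low-noise timesteps, which is the rebalancing that speeds up convergence.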
The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll a function to generate the timesteps weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. 
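To make the timestep-bias sampling shown above more concrete, here is a minimal, hypothetical sketch of such a weighting function. It is simplified relative to the script's generate_timestep_weights (for example, it ignores --timestep_bias_begin and --timestep_bias_end), and the helper name is made up for illustration.

import torch

def biased_timestep_weights(num_train_timesteps, strategy="none", multiplier=2.0, portion=0.25):
    # Hypothetical, simplified stand-in: up-weight one end of the timestep range so that
    # torch.multinomial samples it more often during training.
    weights = torch.ones(num_train_timesteps)
    if strategy != "none":
        num_biased = max(1, int(num_train_timesteps * portion))
        if strategy == "earlier":
            weights[:num_biased] *= multiplier
        elif strategy == "later":
            weights[-num_biased:] *= multiplier
        else:
            raise ValueError(f"Unknown timestep bias strategy: {strategy}")
    return weights / weights.sum()

# Example: favor the later (noisier) timesteps when drawing a batch of timesteps.
# weights = biased_timestep_weights(1000, strategy="later", multiplier=2.0, portion=0.25)
# timesteps = torch.multinomial(weights, batch_size, replacement=True).long()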
Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." +image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training a SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use it’s refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/0b3ed1869b025b8d3f4cc23e503f237e.txt b/scrapped_outputs/0b3ed1869b025b8d3f4cc23e503f237e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/0b3ed1869b025b8d3f4cc23e503f237e.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin,Anastasia Maltseva,Igor Pavlov,Andrei Filatov,Arseniy Shakhmatov,Andrey Kuznetsov,Denis Dimitrov, Zein Shaheen The description from it’s Github page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. 
In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder decoder model based on the T5 architecture. New U-Net architecture featuring BigGAN-deep blocks doubles depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. 
generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." +>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. 
device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. diff --git a/scrapped_outputs/0b45eb128d0b7b3102218926950fea49.txt b/scrapped_outputs/0b45eb128d0b7b3102218926950fea49.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0b49c59a79843fe4fa44c7e2972c1152.txt b/scrapped_outputs/0b49c59a79843fe4fa44c7e2972c1152.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/0b49c59a79843fe4fa44c7e2972c1152.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. 
Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/0b5b8fd5d04d9afaeecec176770eb4b6.txt b/scrapped_outputs/0b5b8fd5d04d9afaeecec176770eb4b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ef40f070274021b77fb2e361dbd5e9e695ba0c --- /dev/null +++ b/scrapped_outputs/0b5b8fd5d04d9afaeecec176770eb4b6.txt @@ -0,0 +1,116 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into a AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +model = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/0b6598a82fd0af2ee36767ce05814ab9.txt b/scrapped_outputs/0b6598a82fd0af2ee36767ce05814ab9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0b74104af6d59e88df57f73ebd10fa82.txt b/scrapped_outputs/0b74104af6d59e88df57f73ebd10fa82.txt new file mode 100644 index 0000000000000000000000000000000000000000..4586d640c21b1ec0deb7b532e633cfae56bf77a3 --- /dev/null +++ b/scrapped_outputs/0b74104af6d59e88df57f73ebd10fa82.txt @@ -0,0 +1,108 @@ +Quicktour + +Get up and running with 🧨 Diffusers quickly! 
+Whether you’re a developer or an everyday user, this quick tour will help you get started and show you how to use DiffusionPipeline for inference. +Before you begin, make sure you have all the necessary libraries installed: + + + Copied +pip install --upgrade diffusers accelerate transformers +accelerate speeds up model loading for inference and training +transformers is required to run the most popular diffusion models, such as Stable Diffusion + +DiffusionPipeline + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. You can use the DiffusionPipeline out-of-the-box for many tasks across different modalities. Take a look at the table below for some supported tasks: +Task +Description +Pipeline +Unconditional Image Generation +generate an image from gaussian noise +unconditional_image_generation +Text-Guided Image Generation +generate an image given a text prompt +conditional_image_generation +Text-Guided Image-to-Image Translation +adapt an image guided by a text prompt +img2img +Text-Guided Image-Inpainting +fill the masked part of an image given the image, the mask and a text prompt +inpaint +Text-Guided Depth-to-Image Translation +adapt parts of an image guided by a text prompt while preserving structure via depth estimation +depth2image +For more in-detail information on how diffusion pipelines function for the different tasks, please have a look at the Using Diffusers section. +As an example, start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for text-to-image generation with Stable Diffusion. +For Stable Diffusion, please carefully read its license before running the model. +This is due to the improved image generation capabilities of the model and the potentially harmful content that could be produced with it. +Please, head over to your stable diffusion model of choice, e.g. runwayml/stable-diffusion-v1-5, and read the license. +You can load the model as follows: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. +You can move the generator object to GPU, just like you would in PyTorch. + + + Copied +>>> pipeline.to("cuda") +Now you can use the pipeline on your text prompt: + + + Copied +>>> image = pipeline("An image of a squirrel in Picasso style").images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("image_of_squirrel_painting.png") +Note: You can also use the pipeline locally by downloading the weights via: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +and then loading the saved weights into the pipeline. + + + Copied +>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5") +Running the pipeline is then identical to the code above as it’s the same model architecture. 
+ + + Copied +>>> generator.to("cuda") +>>> image = generator("An image of a squirrel in Picasso style").images[0] +>>> image.save("image_of_squirrel_painting.png") +Diffusion systems can be used with multiple different schedulers each with their +pros and cons. By default, Stable Diffusion runs with PNDMScheduler, but it’s very simple to +use a different scheduler. E.g. if you would instead like to use the EulerDiscreteScheduler scheduler, +you could use it as follows: + + + Copied +>>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # change scheduler to Euler +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) +For more in-detail information on how to change between schedulers, please refer to the Using Schedulers guide. +Stability AI’s Stable Diffusion model is an impressive image generation model +and can do much more than just generating images from text. We have dedicated a whole documentation page, +just for Stable Diffusion here. +If you want to know how to optimize Stable Diffusion to run on less memory, higher inference speeds, on specific hardware, such as Mac, or with ONNX Runtime, please have a look at our +optimization pages: +Optimized PyTorch on GPU +Mac OS with PyTorch +ONNX +OpenVINO +If you want to fine-tune or train your diffusion model, please have a look at the training section +Finally, please be considerate when distributing generated images publicly 🤗. diff --git a/scrapped_outputs/0b8528226218b91368083412e05d10d0.txt b/scrapped_outputs/0b8528226218b91368083412e05d10d0.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/0b8528226218b91368083412e05d10d0.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/0b8926a7f3c481b06b1a17726ec830e3.txt b/scrapped_outputs/0b8926a7f3c481b06b1a17726ec830e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..68423ddd910d132ae1322ca37d1a005d76c1e75b --- /dev/null +++ b/scrapped_outputs/0b8926a7f3c481b06b1a17726ec830e3.txt @@ -0,0 +1,238 @@ +VQDiffusionScheduler + + +Overview + +Original paper can be found here + +VQDiffusionScheduler + + +class diffusers.VQDiffusionScheduler + +< +source +> +( +num_vec_classes: int +num_train_timesteps: int = 100 +alpha_cum_start: float = 0.99999 +alpha_cum_end: float = 9e-06 +gamma_cum_start: float = 9e-06 +gamma_cum_end: float = 0.99999 + +) + + +Parameters + +num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. + + +num_train_timesteps (int) — +Number of diffusion steps used to train the model. + + +alpha_cum_start (float) — +The starting cumulative alpha value. + + +alpha_cum_end (float) — +The ending cumulative alpha value. + + +gamma_cum_start (float) — +The starting cumulative gamma value. + + +gamma_cum_end (float) — +The ending cumulative gamma value. + + + +The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image. +The VQ-diffusion scheduler converts the transformer’s output into a sample for the unnoised image at the previous +diffusion timestep. 
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2111.14822 + +log_Q_t_transitioning_to_known_class + +< +source +> +( +t: torch.int32 +x_t: LongTensor +log_onehot_x_t: FloatTensor +cumulative: bool + +) +→ +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Parameters + +t (torch.Long) — +The timestep that determines which transition matrix is used. + + +x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. + + +log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t + + +cumulative (bool) — +If cumulative is False, we use the single step transition matrix t-1->t. If cumulative is True, +we use the cumulative transition matrix 0->t. + + +Returns + +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + + + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms):
q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
          .                  .                  .
q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)
cumulative result (omitting logarithms):
q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0)
          .                         .                         .
q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1})
Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. +See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix +is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs. + +q_posterior + +< +source +> +( +log_p_x_0 +x_t +t + +) +→ +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +Parameters + +t (torch.Long) — +The timestep that determines which transition matrix is used. + + +Returns + +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + + + +The log probabilities for the predicted classes of the image at timestep t-1. I.e. Equation (11). + + +Calculates the log probabilities for the predicted classes of the image at timestep t-1. I.e. Equation (11). +Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only +forward probabilities. +Equation (11) stated in terms of forward probabilities via Equation (5): +Where: +the sum is over x_0 = {C_0 … C_{k-1}} (classes for x_0) +p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) q(x_{t-1} | x_0) p(x_0) / q(x_t | x_0) ) + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. 
+ + +device (str or torch.device) — +device to place the timesteps and the diffusion process parameters (alpha, beta, gamma) on. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: torch.int64 +sample: LongTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + +Parameters + +t (torch.long) — +The timestep that determines which transition matrices are used. +x_t — (torch.LongTensor of shape (batch size, num latent pixels)): +The classes of each latent pixel at time t +generator — (torch.Generator or None): +RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from. + + +return_dict (bool) — +option for returning tuple rather than VQDiffusionSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the +docstring for self.q_posterior for more in depth docs on how Equation (11) is computed. diff --git a/scrapped_outputs/0bb7f80ee966688a72592a870cffbb61.txt b/scrapped_outputs/0bb7f80ee966688a72592a870cffbb61.txt new file mode 100644 index 0000000000000000000000000000000000000000..051657a0a8a5f093e79718c5c36eb36f9c0bf990 --- /dev/null +++ b/scrapped_outputs/0bb7f80ee966688a72592a870cffbb61.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
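Because the concrete pipeline class is resolved from the checkpoint's config, it can be useful to inspect which class was actually instantiated. A small illustrative check (the resolved class depends on the checkpoint and the installed Diffusers version):
>>> from diffusers import AutoPipelineForText2Image

>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # For this checkpoint the resolved class is typically StableDiffusionPipeline.
>>> print(pipeline.__class__.__name__)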
from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". 
offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. 
Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. 
device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. 
Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. 
The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/0c2f3dae04bae91f43aed44383315b0e.txt b/scrapped_outputs/0c2f3dae04bae91f43aed44383315b0e.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/0c2f3dae04bae91f43aed44383315b0e.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. 
clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/0c43d6ad5d98d59604ab0365ba5ae70d.txt b/scrapped_outputs/0c43d6ad5d98d59604ab0365ba5ae70d.txt new file mode 100644 index 0000000000000000000000000000000000000000..6643d5b160e9a2f169422a8c75cb5792940e3a60 --- /dev/null +++ b/scrapped_outputs/0c43d6ad5d98d59604ab0365ba5ae70d.txt @@ -0,0 +1,250 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. 
When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
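Putting the tips above together, a minimal "cat -> dog" editing sketch could look as follows; init_image is assumed to be a 768x768 picture of a cat, and the checkpoint is only an example of a DiffEdit-compatible Stable Diffusion model:
>>> import torch
>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... ).to("cuda")
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

>>> # `init_image` is assumed to be a 768x768 PIL image of a cat.
>>> source_prompt = "a photo of a cat"
>>> target_prompt = "a photo of a dog"

>>> # 1. Mask the regions that need to change for the "cat -> dog" edit.
>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=source_prompt, target_prompt=target_prompt)
>>> # 2. Partially invert the image, guided by a caption of the source concept.
>>> image_latents = pipe.invert(image=init_image, prompt=source_prompt).latents
>>> # 3. Generate the edit: target concept as `prompt`, source concept as `negative_prompt`.
>>> image = pipe(
...     prompt=target_prompt,
...     negative_prompt=source_prompt,
...     mask_image=mask_image,
...     image_latents=image_latents,
... ).images[0]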
The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttnProcessor as defined in self.processor.

Returns

List[PIL.Image.Image] or np.array

When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it's np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor).

Generate a latent mask given a mask prompt, a target prompt, and an image.

 Copied
>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"

>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]

invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 )

Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
image (PIL.Image.Image) — Image or tensor representing an image batch to produce the inverted latents guided by prompt.
inpaint_strength (float, optional, defaults to 0.8) — Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When inpaint_strength is 1, the inversion process is run for the full number of iterations specified in num_inference_steps. image is used as a reference for the inversion process, and adding more noise increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs.
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
)
>>> pipe = pipe.to("cuda")

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A bowl of fruits"

>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents

__call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
mask_image (PIL.Image.Image) — Image or tensor representing an image batch to mask the generated image. White pixels in the mask are repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, 1, H, W).
image_latents (PIL.Image.Image or torch.FloatTensor) — Partially noised image latents from the inversion process to be used as inputs for image generation.
inpaint_strength (float, optional, defaults to 0.8) — Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the denoising process is run on the masked area for the full number of iterations specified in num_inference_steps. image_latents is used as a reference for the masked area, and adding more noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator, optional) — A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings.
Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
)
>>> pipe = pipe.to("cuda")

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> mask_prompt = "A bowl of fruits"
>>> prompt = "A bowl of pears"

>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0]

encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) — prompt to be encoded
device — (torch.device): torch device
num_images_per_prompt (int) — number of images that should be generated per prompt
do_classifier_free_guidance (bool) — whether to use classifier free guidance or not
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional )

Parameters

images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/0c5099f94a5079084210036b7dcef09e.txt b/scrapped_outputs/0c5099f94a5079084210036b7dcef09e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/0c5099f94a5079084210036b7dcef09e.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion.
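Like the other discrete schedulers, LMSDiscreteScheduler can be swapped into an existing pipeline via from_config. A brief sketch (the checkpoint and prompt are only examples):
>>> from diffusers import LMSDiscreteScheduler, StableDiffusionPipeline

>>> pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # Reuse the existing scheduler config so the beta schedule and timestep settings carry over.
>>> pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
>>> image = pipeline("An astronaut riding a horse").images[0]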
LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/0cbf5ff5cfd8501babe6e9a7f5ea1d1b.txt b/scrapped_outputs/0cbf5ff5cfd8501babe6e9a7f5ea1d1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6bf3fb5ccdc0024cae1750fc6804f62a64341e8e --- /dev/null +++ b/scrapped_outputs/0cbf5ff5cfd8501babe6e9a7f5ea1d1b.txt @@ -0,0 +1,59 @@ +RePaint RePaint: Inpainting using Denoising Diffusion Probabilistic Models is by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. +RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. The original codebase can be found at andreas128/RePaint. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. RePaintPipeline class diffusers.RePaintPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (RePaintScheduler) — +A RePaintScheduler to be used in combination with unet to denoise the encoded image. Pipeline for image inpainting using RePaint. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: typing.Union[torch.Tensor, PIL.Image.Image] mask_image: typing.Union[torch.Tensor, PIL.Image.Image] num_inference_steps: int = 250 eta: float = 0.0 jump_length: int = 10 jump_n_sample: int = 10 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +The original image to inpaint on. mask_image (torch.FloatTensor or PIL.Image.Image) — +The mask_image where 0.0 define which part of the original image to inpaint. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float) — +The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to +DDIM and 1.0 is the DDPM scheduler. jump_length (int, optional, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, optional, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from io import BytesIO +>>> import torch +>>> import PIL +>>> import requests +>>> from diffusers import RePaintPipeline, RePaintScheduler + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +>>> mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +>>> # Load the original image and the mask as PIL images +>>> original_image = download_image(img_url).resize((256, 256)) +>>> mask_image = download_image(mask_url).resize((256, 256)) + +>>> # Load the RePaint scheduler and pipeline based on a pretrained DDPM model +>>> scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256") +>>> pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> output = pipe( +... image=original_image, +... mask_image=mask_image, +... num_inference_steps=250, +... eta=0.0, +... jump_length=10, +... jump_n_sample=10, +... generator=generator, +... 
) +>>> inpainted_image = output.images[0] ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/0ccb9f55b4a609382682197514c9b4b8.txt b/scrapped_outputs/0ccb9f55b4a609382682197514c9b4b8.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbcb60fe130c5e417d414793d40c89f0240a5740 --- /dev/null +++ b/scrapped_outputs/0ccb9f55b4a609382682197514c9b4b8.txt @@ -0,0 +1,69 @@ +Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with a SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which let’s you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. 
Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. 
Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] diff --git a/scrapped_outputs/0d238bcca46d38eb37d34093413da900.txt b/scrapped_outputs/0d238bcca46d38eb37d34093413da900.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/0d238bcca46d38eb37d34093413da900.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. 
Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. 
Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
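For reference, a minimal sketch of the model-offloading variant described above (assuming the warp-ai/wuerstchen checkpoint is available; no explicit .to("cuda") call is needed once offloading is enabled): Copied
import torch
from diffusers import WuerstchenCombinedPipeline

pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16)
# Trade a little speed for a much smaller GPU memory footprint.
pipe.enable_model_cpu_offload()

images = pipe(prompt="Anthropomorphic cat dressed as a fire fighter").images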
WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for text prompt Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24 * 10.67)=256 and +width=int(24 * 10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... 
).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrain("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/0d46b348ee7c0a4de704face67787972.txt b/scrapped_outputs/0d46b348ee7c0a4de704face67787972.txt new file mode 100644 index 0000000000000000000000000000000000000000..decca91d4d95af55c3709464e9e718dff77b607a --- /dev/null +++ b/scrapped_outputs/0d46b348ee7c0a4de704face67787972.txt @@ -0,0 +1,170 @@ +🧨 Diffusers + +🤗 Diffusers provides pretrained vision and audio diffusion models, and serves as a modular toolbox for inference and training. +More precisely, 🤗 Diffusers offers: +State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see Using Diffusers) or have a look at Pipelines to get an overview of all supported pipelines and their corresponding papers. +Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference. For more information see Schedulers. +Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system. See Models for more details +Training examples to show how to train the most popular diffusion model tasks. For more information see Training. + +🧨 Diffusers Pipelines + +The following table summarizes all officially supported pipelines, their corresponding paper, and if +available a colab notebook to directly try them out. 
+Pipeline +Paper +Tasks +Colab +alt_diffusion +AltDiffusion +Image-to-Image Text-Guided Generation + +audio_diffusion +Audio Diffusion +Unconditional Audio Generation + +controlnet +ControlNet with Stable Diffusion +Image-to-Image Text-Guided Generation +[ +cycle_diffusion +Cycle Diffusion +Image-to-Image Text-Guided Generation + +dance_diffusion +Dance Diffusion +Unconditional Audio Generation + +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation + +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image + +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation + +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting + +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation + +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +semantic_stable_diffusion +Semantic Guidance +Text-Guided Generation + +stable_diffusion_text2img +Stable Diffusion +Text-to-Image Generation + +stable_diffusion_img2img +Stable Diffusion +Image-to-Image Text-Guided Generation + +stable_diffusion_inpaint +Stable Diffusion +Text-Guided Image Inpainting + +stable_diffusion_panorama +MultiDiffusion +Text-to-Panorama Generation + +stable_diffusion_pix2pix +InstructPix2Pix +Text-Guided Image Editing + +stable_diffusion_pix2pix_zero +Zero-shot Image-to-Image Translation +Text-Guided Image Editing + +stable_diffusion_attend_and_excite +Attend and Excite for Stable Diffusion +Text-to-Image Generation + +stable_diffusion_self_attention_guidance +Self-Attention Guidance +Text-to-Image Generation + +stable_diffusion_image_variation +Stable Diffusion Image Variations +Image-to-Image Generation + +stable_diffusion_latent_upscale +Stable Diffusion Latent Upscaler +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting + +stable_diffusion_2 +Depth-Conditional Stable Diffusion +Depth-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation + +stable_unclip +Stable unCLIP +Text-to-Image Generation + +stable_unclip +Stable unCLIP +Image-to-Image Text-Guided Generation + +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation + +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation + +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image 
Generation + +Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. diff --git a/scrapped_outputs/0d618645072df8b364d48b06660143f8.txt b/scrapped_outputs/0d618645072df8b364d48b06660143f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/0d618645072df8b364d48b06660143f8.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. 
When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/0d6d58615e2f15b349fe52192472aaba.txt b/scrapped_outputs/0d6d58615e2f15b349fe52192472aaba.txt new file mode 100644 index 0000000000000000000000000000000000000000..57df4f740b0ed4c2de36ed35b7d1339813164b7f --- /dev/null +++ b/scrapped_outputs/0d6d58615e2f15b349fe52192472aaba.txt @@ -0,0 +1,222 @@ +Denoising Diffusion Implicit Models (DDIM) + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. 
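Before the API reference that follows, here is a brief sketch (not part of the original page) of swapping DDIM into an existing pipeline; the checkpoint name is only illustrative, and from_config() reuses the configuration of whatever scheduler the pipeline shipped with.

```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler

# Load any pipeline (checkpoint chosen purely for illustration)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler with DDIM, reusing its configuration
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# DDIM typically needs far fewer steps than DDPM-style ancestral sampling
image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=50).images[0]
```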
+ +DDIMScheduler + + +class diffusers.DDIMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +clip_sample: bool = True +set_alpha_to_one: bool = True +steps_offset: int = 0 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +clip_sample_range: float = 1.0 +sample_max_value: float = 1.0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +clip_sample (bool, default True) — +option to clip predicted sample for numerical stability. + + +clip_sample_range (float, default 1.0) — +the maximum magnitude for sample clipping. Valid only when clip_sample=True. + + +set_alpha_to_one (bool, default True) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). Valid only when thresholding=True. + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True. + + + +Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising +diffusion probabilistic models (DDPMs) with non-Markovian guidance. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. 
+For more details, see the original paper: https://arxiv.org/abs/2010.02502 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +eta: float = 0.0 +use_clipped_model_output: bool = False +generator = None +variance_noise: typing.Optional[torch.FloatTensor] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +eta (float) — weight of noise for added noise in diffusion step. + + +use_clipped_model_output (bool) — if True, compute “corrected” model_output from the clipped +predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when +self.config.clip_sample is True. If no clipping has happened, “corrected” model_output would +coincide with the one provided as input and use_clipped_model_output will have not effect. +generator — random number generator. + + +variance_noise (torch.FloatTensor) — instead of generating noise for the variance using generator, we +can directly provide the noise for the variance itself. This is useful for methods such as +CycleDiffusion. (https://arxiv.org/abs/2210.05559) + + +return_dict (bool) — option for returning tuple rather than DDIMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/0dab9e2bea20007da64c2dd35a06b6be.txt b/scrapped_outputs/0dab9e2bea20007da64c2dd35a06b6be.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0dbbf31587a31517c397861727d3324e.txt b/scrapped_outputs/0dbbf31587a31517c397861727d3324e.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb94d834121295af6b4bed33934b6c96b5fc6a51 --- /dev/null +++ b/scrapped_outputs/0dbbf31587a31517c397861727d3324e.txt @@ -0,0 +1,79 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. It only generates images that resemble its training data distribution. 
This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. 
You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. + + + + Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub + + +If you’re training with more than one GPU, add the --multi_gpu parameter to the training command: Copied accelerate launch --multi_gpu train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub + + +The training script creates and saves a checkpoint file in your repository. 
Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/0dcd4398c0af644af7f3f31ab3351aec.txt b/scrapped_outputs/0dcd4398c0af644af7f3f31ab3351aec.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/0dcd4398c0af644af7f3f31ab3351aec.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output:
during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model
during inference, a scheduler defines how to update a sample based on a pretrained model’s output
Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below:
A1111/k-diffusion | 🤗 Diffusers | Usage
DPM++ 2M | DPMSolverMultistepScheduler |
DPM++ 2M Karras | DPMSolverMultistepScheduler | init with use_karras_sigmas=True
DPM++ 2M SDE | DPMSolverMultistepScheduler | init with algorithm_type="sde-dpmsolver++"
DPM++ 2M SDE Karras | DPMSolverMultistepScheduler | init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++"
DPM++ 2S a | N/A | very similar to DPMSolverSinglestepScheduler
DPM++ 2S a Karras | N/A | very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...)
DPM++ SDE | DPMSolverSinglestepScheduler |
DPM++ SDE Karras | DPMSolverSinglestepScheduler | init with use_karras_sigmas=True
DPM2 | KDPM2DiscreteScheduler |
DPM2 Karras | KDPM2DiscreteScheduler | init with use_karras_sigmas=True
DPM2 a | KDPM2AncestralDiscreteScheduler |
DPM2 a Karras | KDPM2AncestralDiscreteScheduler | init with use_karras_sigmas=True
DPM adaptive | N/A |
DPM fast | N/A |
Euler | EulerDiscreteScheduler |
Euler a | EulerAncestralDiscreteScheduler |
Heun | HeunDiscreteScheduler |
LMS | LMSDiscreteScheduler |
LMS Karras | LMSDiscreteScheduler | init with use_karras_sigmas=True
N/A | DEISMultistepScheduler |
N/A | UniPCMultistepScheduler |
All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class.
Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. 
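As a quick illustration of the loading and saving methods documented above, a short sketch; the repository id is the same example given in the parameter description, and the local directory name is arbitrary.

```python
from diffusers import DDPMScheduler

# Load the scheduler configuration from the "scheduler" subfolder of a pipeline repository
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", subfolder="scheduler")

# Configuration attributes passed to __init__ are stored on `scheduler.config`
print(scheduler.config.num_train_timesteps)

# Save the configuration to a local directory and reload it from there
scheduler.save_pretrained("./my-scheduler")
scheduler = DDPMScheduler.from_pretrained("./my-scheduler")
```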
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/0deefae8018f073aaa00634dedea52cc.txt b/scrapped_outputs/0deefae8018f073aaa00634dedea52cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/0deefae8018f073aaa00634dedea52cc.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. 
For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. 
You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. 
For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/0e040b86a80f30e9c6c4f849d705cd7f.txt b/scrapped_outputs/0e040b86a80f30e9c6c4f849d705cd7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/0e040b86a80f30e9c6c4f849d705cd7f.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. 
Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
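Putting the optimizations above together, a rough end-to-end sketch for the image-to-image pipeline used throughout this guide (assuming PyTorch 2.0, so scaled-dot product attention is already used and xFormers is unnecessary):

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# If GPU memory is tight, call pipeline.enable_model_cpu_offload() instead of .to("cuda")
# Compile the UNet for an extra speed boost on PyTorch 2.0
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
)
image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=init_image
).images[0]
```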
diff --git a/scrapped_outputs/0e0f2f439499567ecc2b06458b6fd0b0.txt b/scrapped_outputs/0e0f2f439499567ecc2b06458b6fd0b0.txt new file mode 100644 index 0000000000000000000000000000000000000000..48649ec5c0477bba9de1fe1afcb189a2b6b4fbd9 --- /dev/null +++ b/scrapped_outputs/0e0f2f439499567ecc2b06458b6fd0b0.txt @@ -0,0 +1,88 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. 
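maybe_convert_prompt() is normally called for you inside the pipeline, but you can also call it directly to inspect how a multi-vector token gets expanded. The sketch below assumes the charturnerv2.pt file from the example above is a multi-vector embedding; the exact expanded token names depend on the embedding you load.

from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, a character turnaround of a woman wearing a black jacket"
# if the embedding holds several vectors, the token is replaced by one sub-token
# per vector (for example "charturnerv2 charturnerv2_1 charturnerv2_2 ...")
expanded = pipe.maybe_convert_prompt(prompt, pipe.tokenizer)
print(expanded)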
unload_textual_inversion < source > ( tokens: Union = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") diff --git a/scrapped_outputs/0edc23ac7d764feb78d52b1c71384b7f.txt b/scrapped_outputs/0edc23ac7d764feb78d52b1c71384b7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..013c1269daf8c4cfa37c93f9c5a59b6be09f9038 --- /dev/null +++ b/scrapped_outputs/0edc23ac7d764feb78d52b1c71384b7f.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. 
Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. 
Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 
2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a "Good first issue" Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. 
If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. 
We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a "Good second issue" Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. 
+If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. 
+ """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. 
A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. 
You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. 
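As a short, purely illustrative example (not taken from the codebase), a Google-style docstring looks like this:

def resize_image(image, height, width):
    """Resizes an image to the given dimensions.

    Args:
        image (`PIL.Image.Image`): The image to resize.
        height (`int`): The target height in pixels.
        width (`int`): The target width in pixels.

    Returns:
        `PIL.Image.Image`: The resized image.
    """
    return image.resize((width, height))

See existing Diffusers docstrings for the exact formatting conventions used in the library.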
diff --git a/scrapped_outputs/0ef23d047ea0df42b96ebf37ef535e78.txt b/scrapped_outputs/0ef23d047ea0df42b96ebf37ef535e78.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/0ef23d047ea0df42b96ebf37ef535e78.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. 
In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. 
For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. 
Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/0f0a004579d0526ad92c0c4867027446.txt b/scrapped_outputs/0f0a004579d0526ad92c0c4867027446.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae010f11e65233dac43dd755c7b86163538bf00a --- /dev/null +++ b/scrapped_outputs/0f0a004579d0526ad92c0c4867027446.txt @@ -0,0 +1,69 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. 
Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. 
You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. It is also increasingly common to load and merge multiple LoRAs to create new and unique images. You can learn more about it in the in-depth Merge LoRAs guide since merging is outside the scope of this loading guide. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. 
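For example, reusing the pipeline and the cereal-box LoRA loaded above, a half-strength generation is a small sketch like this:

# scale=0.0 ignores the LoRA entirely, scale=1.0 applies it fully
image = pipeline(
    "bears, pizza bites",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
image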
To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Kohya TheLastBen To load a Kohya LoRA, let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai as an example: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. IP-Adapter IP-Adapter is a lightweight adapter that enables image prompting for any diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. You can learn more about how to use IP-Adapter for different tasks and specific use cases in the IP-Adapter guide. Diffusers currently only supports IP-Adapter for some of the most popular pipelines. Feel free to open a feature request if you have a cool use case and want to integrate IP-Adapter with an unsupported pipeline! +Official IP-Adapter checkpoints are available from h94/IP-Adapter. To start, load a Stable Diffusion checkpoint. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Then load the IP-Adapter weights and add it to the pipeline with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") Once loaded, you can use the pipeline with an image and text prompt to guide the image generation process. Copied image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality, wearing sunglasses', +    ip_adapter_image=image, +    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", +    num_inference_steps=50, +    generator=generator, +).images[0] +images     IP-Adapter Plus IP-Adapter relies on an image encoder to generate image features. 
If the IP-Adapter repository contains an image_encoder subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you’ll need to explicitly load the image encoder with a CLIPVisionModelWithProjection model and pass it to the pipeline. This is the case for IP-Adapter Plus checkpoints which use the ViT-H image encoder. Copied from transformers import CLIPVisionModelWithProjection + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16 +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + image_encoder=image_encoder, + torch_dtype=torch.float16 +).to("cuda") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors") diff --git a/scrapped_outputs/0f34b5dd3b7740fd37240fb63fa3c467.txt b/scrapped_outputs/0f34b5dd3b7740fd37240fb63fa3c467.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/0f3c4a10c686dc07806be98846a559bc.txt b/scrapped_outputs/0f3c4a10c686dc07806be98846a559bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d389bbcd127fb4947ea0a7cb5913033c8204bc76 --- /dev/null +++ b/scrapped_outputs/0f3c4a10c686dc07806be98846a559bc.txt @@ -0,0 +1,111 @@ +Stable Diffusion text-to-image fine-tuning + +The train_text_to_image.py script shows how to fine-tune the stable diffusion model on your own dataset. +The text-to-image fine-tuning script is experimental. It’s easy to overfit and run into issues like catastrophic forgetting. We recommend to explore different hyperparameters to get the best results on your dataset. + +Running locally + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: + + + Copied +pip install git+https://github.com/huggingface/diffusers.git +pip install -U -r requirements.txt +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config +You need to accept the model license before downloading or using the weights. In this example we’ll use model version v1-4, so you’ll need to visit its card, read the license and tick the checkbox if you agree. +You have to be a registered user in 🤗 Hugging Face Hub, and you’ll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation. +Run the following command to authenticate your token + + + Copied +huggingface-cli login +If you have already cloned the repo, then you won’t need to go through these steps. Instead, you can pass the path to your local checkout to the training script and it will be loaded from there. + +Hardware Requirements for Fine-tuning + +Using gradient_checkpointing and mixed_precision it should be possible to fine tune the model on a single 24GB GPU. For higher batch_size and faster training it’s better to use GPUs with more than 30GB of GPU memory. You can also use JAX / Flax for fine-tuning on TPUs or GPUs, see below for details. + +Fine-tuning Example + +The following script will launch a fine-tuning run using Justin Pinkneys’ captioned Pokemon dataset, available in Hugging Face Hub. 
+ + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" +To run on your own training files you need to prepare the dataset according to the format required by datasets. You can upload your dataset to the Hub, or you can prepare a local folder with your files. This documentation explains how to do it. +You should modify the script if you wish to use custom loading logic. We have left pointers in the code in the appropriate places :) + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export TRAIN_DIR="path_to_your_dataset" +export OUTPUT_DIR="path_to_save_model" + +accelerate launch train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$TRAIN_DIR \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} +Once training is finished the model will be saved to the OUTPUT_DIR specified in the command. To load the fine-tuned model for inference, just pass that path to StableDiffusionPipeline: + + + Copied +from diffusers import StableDiffusionPipeline + +model_path = "path_to_saved_model" +pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(prompt="yoda").images[0] +image.save("yoda-pokemon.png") + +Flax / JAX fine-tuning + +Thanks to @duongna211 it’s possible to fine-tune Stable Diffusion using Flax! This is very efficient on TPU hardware but works great on GPUs too. You can use the Flax training script like this: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" diff --git a/scrapped_outputs/0f9eacec89341d2025dfd34536f9a2ca.txt b/scrapped_outputs/0f9eacec89341d2025dfd34536f9a2ca.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/0f9eacec89341d2025dfd34536f9a2ca.txt @@ -0,0 +1,65 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. 
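As a rough sketch of what this looks like end-to-end, assuming a publicly available latent consistency model checkpoint such as SimianLuo/LCM_Dreamshaper_v7 (the repository name is an assumption here, not something prescribed by this guide), a handful of steps is enough: Copied
import torch
from diffusers import DiffusionPipeline

# assumes an LCM checkpoint (e.g. SimianLuo/LCM_Dreamshaper_v7) that ships an LCMScheduler config
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16).to("cuda")

# 4 steps is usually enough for an LCM; a guidance scale around 8 is a common choice
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=4, guidance_scale=8.0).images[0]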
LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. 
This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
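To make the timestep schedule described above more concrete, here is a small sketch of set_timesteps(); the repository name is only an example, and any checkpoint that ships an LCM scheduler config behaves the same way: Copied
from diffusers import LCMScheduler

# example checkpoint (an assumption); any repo with an LCM scheduler config works the same way
scheduler = LCMScheduler.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", subfolder="scheduler")

# take 4 evenly spaced timesteps out of the 50-step training/distillation schedule
scheduler.set_timesteps(num_inference_steps=4, original_inference_steps=50)
print(scheduler.timesteps)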
diff --git a/scrapped_outputs/0fcfb49b76235b1fad8e3663a91fb48a.txt b/scrapped_outputs/0fcfb49b76235b1fad8e3663a91fb48a.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/0fcfb49b76235b1fad8e3663a91fb48a.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). 
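For example, swapping a pipeline’s default scheduler for the “DPM++ 2M Karras” entry from the table above can be sketched like this (the checkpoint is just an illustration): Copied
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)

# reuse the existing scheduler config, but switch to the DPM++ 2M Karras equivalent
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config, use_karras_sigmas=True)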
from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. 
Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/0fdd5b90b5c044b397676ae25d1b3dda.txt b/scrapped_outputs/0fdd5b90b5c044b397676ae25d1b3dda.txt new file mode 100644 index 0000000000000000000000000000000000000000..d55e063674e7ba7d673fc575bc54864e06986365 --- /dev/null +++ b/scrapped_outputs/0fdd5b90b5c044b397676ae25d1b3dda.txt @@ -0,0 +1,85 @@ +🧨 Diffusers Training Examples + +Diffusers training examples are a collection of scripts to demonstrate how to effectively use the diffusers library +for a variety of use cases. +Note: If you are looking for official examples on how to use diffusers for inference, +please have a look at src/diffusers/pipelines +Our examples aspire to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. +More specifically, this means: +Self-contained: An example script shall only depend on “pip-install-able” Python packages that can be found in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. requirements.txt and execute the example script. 
+Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won’t work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data and the training loop to allow you to tweak and edit them as required. +Beginner-friendly: We do not aim for providing state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them too complex for beginners. +One-purpose-only: Examples should show one task and one task only. Even if a task is from a modeling +point of view very similar, e.g. image super-resolution and image modification tend to use the same model and training method, we want examples to showcase only one task to keep them as readable and easy-to-understand as possible. +We provide official examples that cover the most popular tasks of diffusion models. +Official examples are actively maintained by the diffusers maintainers and we try to rigorously follow our example philosophy as defined above. +If you feel like another important example should exist, we are more than happy to welcome a Feature Request or directly a Pull Request from you! +Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support: +Unconditional Training +Text-to-Image Training +Text Inversion +Dreambooth +LoRA Support +ControlNet +InstructPix2Pix +Custom Diffusion +If possible, please install xFormers for memory efficient attention. This could help make your training faster and less memory intensive. +Task +🤗 Accelerate +🤗 Datasets +Colab +Unconditional Image Generation +✅ +✅ + +Text-to-Image fine-tuning +✅ +✅ + +Textual Inversion +✅ +- + +Dreambooth +✅ +- + +Training with LoRA +✅ +- +- +ControlNet +✅ +✅ +- +InstructPix2Pix +✅ +✅ +- +Custom Diffusion +✅ +✅ +- + +Community + +In addition, we provide community examples, which are examples added and maintained by our community. +Community examples can consist of both training examples or inference pipelines. +For such examples, we are more lenient regarding the philosophy defined above and also cannot guarantee to provide maintenance for every issue. +Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the community examples folder. The community folder therefore includes training examples and inference pipelines. +Note: Community examples can be a great first contribution to show to the community how you like to use diffusers 🪄. + +Important note + +To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
+Then cd in the example folder of your choice and run + + + Copied +pip install -r requirements.txt diff --git a/scrapped_outputs/0ffc5e864c8f62d13b592c822162760a.txt b/scrapped_outputs/0ffc5e864c8f62d13b592c822162760a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/10002242b713a74d1a50bf48ae20a3ad.txt b/scrapped_outputs/10002242b713a74d1a50bf48ae20a3ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..6944df78621a4402f3dc6b9675ba16c4fe0e310a --- /dev/null +++ b/scrapped_outputs/10002242b713a74d1a50bf48ae20a3ad.txt @@ -0,0 +1,164 @@ +Euler Ancestral scheduler + + +Overview + +Ancestral sampling with Euler method steps. Based on the original (k-diffusion)[https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72] implementation by Katherine Crowson. +Fast scheduler which often times generates good outputs with 20-30 steps. + +EulerAncestralDiscreteScheduler + + +class diffusers.EulerAncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. 
If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +generator (torch.Generator, optional) — Random number generator. + + +return_dict (bool) — option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput if return_dict is True, otherwise +a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/101380647ec6bfb3e11f7460aca77153.txt b/scrapped_outputs/101380647ec6bfb3e11f7460aca77153.txt new file mode 100644 index 0000000000000000000000000000000000000000..4489d79b093be6851dc1a0af80970153dacaabb8 --- /dev/null +++ b/scrapped_outputs/101380647ec6bfb3e11f7460aca77153.txt @@ -0,0 +1,164 @@ +Euler Ancestral scheduler + + +Overview + +Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson. +Fast scheduler which often times generates good outputs with 20-30 steps. + +EulerAncestralDiscreteScheduler + + +class diffusers.EulerAncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. 
+SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +generator (torch.Generator, optional) — Random number generator. + + +return_dict (bool) — option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput if return_dict is True, otherwise +a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/101efe450f653225ad5dc65842af671c.txt b/scrapped_outputs/101efe450f653225ad5dc65842af671c.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/101efe450f653225ad5dc65842af671c.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. 
You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/1031e981100930b3bc3d389cbccce2f2.txt b/scrapped_outputs/1031e981100930b3bc3d389cbccce2f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2908009aeaa21b6bb9b1078769b68e294f7cd36 --- /dev/null +++ b/scrapped_outputs/1031e981100930b3bc3d389cbccce2f2.txt @@ -0,0 +1,60 @@ +Text-guided image-inpainting +The StableDiffusionInpaintPipeline allows you to edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion, like runwayml/stable-diffusion-inpainting, specifically trained for inpainting tasks. +Get started by loading an instance of the StableDiffusionInpaintPipeline: Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + +pipeline = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to("cuda") +Download an image and a mask of a dog which you’ll eventually replace: Copied +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) +Now you can create a prompt to replace the mask with something else: Copied +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +image +[Image grid: init_image, mask_image, and the generated output for the prompt “Face of a yellow cat, high resolution, sitting on a park bench”] +A previous experimental implementation of inpainting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn’t contain the new model will still apply the old inpainting method. +Check out the Spaces below to try out image inpainting yourself!
diff --git a/scrapped_outputs/10460f485441d9a9939f01792dc9fafe.txt b/scrapped_outputs/10460f485441d9a9939f01792dc9fafe.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/10460f485441d9a9939f01792dc9fafe.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. 
For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. 
Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/104f4fb097804e3d2f000d691ad353fe.txt b/scrapped_outputs/104f4fb097804e3d2f000d691ad353fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..534ed9b2b2cb527d35728b7a9f190221a428849a --- /dev/null +++ b/scrapped_outputs/104f4fb097804e3d2f000d691ad353fe.txt @@ -0,0 +1,253 @@ +Latent Diffusion + + +Overview + +Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion.py +Text-to-Image Generation +- +pipeline_latent_diffusion_superresolution.py +Super Resolution +- + +Examples: + + +LDMTextToImagePipeline + + +class diffusers.LDMTextToImagePipeline + +< +source +> +( +vqvae: typing.Union[diffusers.models.vq_model.VQModel, diffusers.models.autoencoder_kl.AutoencoderKL] +bert: PreTrainedModel +tokenizer: PreTrainedTokenizer +unet: typing.Union[diffusers.models.unet_2d.UNet2DModel, diffusers.models.unet_2d_condition.UNet2DConditionModel] +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. 
+ + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 1.0 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 1.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt at +the, usually at the expense of lower image quality. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
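A short usage sketch, assuming the CompVis/ldm-text2im-large-256 checkpoint (any compatible LDM text-to-image checkpoint can be substituted): Copied
from diffusers import DiffusionPipeline

# assumes the CompVis/ldm-text2im-large-256 checkpoint
pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
image = pipe("a painting of a squirrel eating a burger", num_inference_steps=50).images[0]
image.save("squirrel.png")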
+ + + +LDMSuperResolutionPipeline + + +class diffusers.LDMSuperResolutionPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) VAE Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. + + + +A pipeline for image super-resolution using Latent +This class inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] = None +batch_size: typing.Optional[int] = 1 +num_inference_steps: typing.Optional[int] = 100 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.Tensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
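Similarly, a minimal super-resolution sketch, assuming the CompVis/ldm-super-resolution-4x-openimages checkpoint and a low-resolution RGB image of your own:

Copied
import torch
from PIL import Image
from diffusers import LDMSuperResolutionPipeline

pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages")
pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# replace this placeholder path with your own low-resolution image
low_res_img = Image.open("low_res_image.png").convert("RGB").resize((128, 128))

# the checkpoint upscales 4x, so a 128x128 input yields a 512x512 output
upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1.0).images[0]
upscaled_image.save("ldm_upscaled_image.png")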
diff --git a/scrapped_outputs/1055b296b99921026f27aa0b46a705fa.txt b/scrapped_outputs/1055b296b99921026f27aa0b46a705fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/107cbaa9c20cbf9d141bc949b1d6ebec.txt b/scrapped_outputs/107cbaa9c20cbf9d141bc949b1d6ebec.txt new file mode 100644 index 0000000000000000000000000000000000000000..de08a59d3b80c6e4fb6b55a1f1ca0865db2fa227 --- /dev/null +++ b/scrapped_outputs/107cbaa9c20cbf9d141bc949b1d6ebec.txt @@ -0,0 +1,86 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. 
The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... 
) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. 
Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/10c74e0cbf68bd4a1cf73d45fa959427.txt b/scrapped_outputs/10c74e0cbf68bd4a1cf73d45fa959427.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b674d6dbdb5acb929fa209ec80df7084a21fd2 --- /dev/null +++ b/scrapped_outputs/10c74e0cbf68bd4a1cf73d45fa959427.txt @@ -0,0 +1,243 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. 
The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames[0] +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames[0] +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames[0] +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. 
Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames[0] +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames[0] +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Tips Video generation is memory-intensive and one way to reduce your memory usage is to set enable_forward_chunking on the pipeline’s UNet so you don’t run the entire feedforward layer at once. Breaking it up into chunks in a loop is more efficient. Check out the Text or image-to-video guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames[0] +>>> video_path = export_to_video(video_frames) +>>> video_path encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames[0] +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... 
) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames[0] +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — +List of video outputs - It can be a nested list of length batch_size, with each sub-list containing denoised Output class for text-to-video pipelines. PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape +(batch_size, num_frames, channels, height, width) diff --git a/scrapped_outputs/10f402d03fbadf8f9f2ab414b666b8da.txt b/scrapped_outputs/10f402d03fbadf8f9f2ab414b666b8da.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/10f402d03fbadf8f9f2ab414b666b8da.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. 
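Because this scheduler injects noise (the “churn” step) and supports a second-order correction, it is typically driven by a custom sampling loop rather than a plain step()-only loop. The sketch below shows the intended call order using the methods documented on this page (set_timesteps(), add_noise_to_input(), step(), step_correct()). The denoise() function is a hypothetical stand-in for a variance-expanding model, and the schedule attribute of per-step noise levels is assumed to be populated by set_timesteps():

Copied
import torch
from diffusers import KarrasVeScheduler

def denoise(sample, sigma):
    # hypothetical placeholder for a variance-expanding score/noise model;
    # returning zeros keeps the sketch runnable without a real checkpoint
    return torch.zeros_like(sample)

scheduler = KarrasVeScheduler()
scheduler.set_timesteps(num_inference_steps=50)

# start from pure noise at the maximum noise level
sample = torch.randn(1, 3, 256, 256) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    sigma = scheduler.schedule[t]
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # 1. "churn": temporarily raise the noise level from sigma to sigma_hat
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma)

    # 2. Euler step from sigma_hat down to sigma_prev
    output = scheduler.step(denoise(sample_hat, sigma_hat), sigma_hat, sigma_prev, sample_hat)

    # 3. second-order correction, skipped on the final step where sigma_prev == 0
    if sigma_prev != 0:
        output = scheduler.step_correct(
            denoise(output.prev_sample, sigma_prev),
            sigma_hat,
            sigma_prev,
            sample_hat,
            output.prev_sample,
            output.derivative,
        )

    sample = output.prev_sample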
KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/111445f03ddeda23186cae960674758b.txt b/scrapped_outputs/111445f03ddeda23186cae960674758b.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/111445f03ddeda23186cae960674758b.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. 
Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/11285eaa73f7e6a6bcdb921a81bc6634.txt b/scrapped_outputs/11285eaa73f7e6a6bcdb921a81bc6634.txt new file mode 100644 index 0000000000000000000000000000000000000000..156e21d626ab15c942cd8ad3d18e0d7c614d887d --- /dev/null +++ b/scrapped_outputs/11285eaa73f7e6a6bcdb921a81bc6634.txt @@ -0,0 +1,66 @@ +🧨 Diffusers Training Examples + +Diffusers training examples are a collection of scripts to demonstrate how to effectively use the diffusers library +for a variety of use cases. +Note: If you are looking for official examples on how to use diffusers for inference, +please have a look at src/diffusers/pipelines +Our examples aspire to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. +More specifically, this means: +Self-contained: An example script shall only depend on “pip-install-able” Python packages that can be found in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. requirements.txt and execute the example script. +Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won’t work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data and the training loop to allow you to tweak and edit them as required. 
+Beginner-friendly: We do not aim for providing state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them too complex for beginners. +One-purpose-only: Examples should show one task and one task only. Even if a task is from a modeling +point of view very similar, e.g. image super-resolution and image modification tend to use the same model and training method, we want examples to showcase only one task to keep them as readable and easy-to-understand as possible. +We provide official examples that cover the most popular tasks of diffusion models. +Official examples are actively maintained by the diffusers maintainers and we try to rigorously follow our example philosophy as defined above. +If you feel like another important example should exist, we are more than happy to welcome a Feature Request or directly a Pull Request from you! +Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support: +Unconditional Training +Text-to-Image Training +Text Inversion +Dreambooth +LoRA Support +If possible, please install xFormers for memory efficient attention. This could help make your training faster and less memory intensive. +Task +🤗 Accelerate +🤗 Datasets +Colab +Unconditional Image Generation +✅ +✅ + +Text-to-Image fine-tuning +✅ +✅ + +Textual Inversion +✅ +- + +Dreambooth +✅ +- + + +Community + +In addition, we provide community examples, which are examples added and maintained by our community. +Community examples can consist of both training examples or inference pipelines. +For such examples, we are more lenient regarding the philosophy defined above and also cannot guarantee to provide maintenance for every issue. +Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the community examples folder. The community folder therefore includes training examples and inference pipelines. +Note: Community examples can be a great first contribution to show to the community how you like to use diffusers 🪄. + +Important note + +To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . +Then cd in the example folder of your choice and run + + + Copied +pip install -r requirements.txt diff --git a/scrapped_outputs/113b96162b04dccbffb877bd7fd44e95.txt b/scrapped_outputs/113b96162b04dccbffb877bd7fd44e95.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/113b96162b04dccbffb877bd7fd44e95.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. 
Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/1142c9c153b3280d967e14c3e00785fb.txt b/scrapped_outputs/1142c9c153b3280d967e14c3e00785fb.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/1142c9c153b3280d967e14c3e00785fb.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. 
The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. 
Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/119ae8c756504ab47cf744a9c372f027.txt b/scrapped_outputs/119ae8c756504ab47cf744a9c372f027.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/119ae8c756504ab47cf744a9c372f027.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. 
In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. 
For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. 
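Before moving on to those techniques, here is a minimal sketch that combines the pipeline-level parameters covered above (height/width, guidance_scale, negative_prompt, and a seeded Generator) in a single call. It reuses the same checkpoint and prompt as the earlier examples and is only meant as a recap of the parameter section:

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Fix the seed for reproducibility, then stack the output size, guidance scale,
# and negative prompt settings from the sections above in one call.
generator = torch.Generator(device="cuda").manual_seed(30)
image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    height=768,
    width=512,
    guidance_scale=7.5,
    negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy",
    generator=generator,
).images[0]
image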
Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed, or offload the model to the CPU and move components to the GPU only when they are needed to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention, which is automatically enabled when you’re running PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/11ef4b213865d8a0fee6f4f7b3b44a60.txt b/scrapped_outputs/11ef4b213865d8a0fee6f4f7b3b44a60.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef87814e315d0b1fa9553c9af5b3de5bd286d86a --- /dev/null +++ b/scrapped_outputs/11ef4b213865d8a0fee6f4f7b3b44a60.txt @@ -0,0 +1,30 @@ +Unconditional Image Generation + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. +Start by creating an instance of DiffusionPipeline and specifying which pipeline checkpoint you would like to download.
+You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for unconditional image generation with DDPM: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. +You can move the generator object to the GPU, just like you would in PyTorch. + + + Copied +>>> generator.to("cuda") +Now you can call the generator to produce an image: + + + Copied +>>> image = generator().images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("generated_image.png") diff --git a/scrapped_outputs/11fe64e46c9ecd4e102931dbfbeb2b1d.txt b/scrapped_outputs/11fe64e46c9ecd4e102931dbfbeb2b1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae04cd19402c4ab82a0fccbebd2b76ec8a611a13 --- /dev/null +++ b/scrapped_outputs/11fe64e46c9ecd4e102931dbfbeb2b1d.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. UNetMotionModel extends the 2D UNet with temporal motion modules for video generation. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
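As a quick orientation before the API reference below, the class can be instantiated directly from its default configuration. This is only a minimal sketch for inspecting the architecture; in practice the weights are usually loaded from a pretrained checkpoint (for example through an AnimateDiff pipeline) rather than initialized randomly:

from diffusers import UNetMotionModel

# Build a randomly initialized UNetMotionModel with the default configuration
# from the signature below, then report its parameter count.
unet = UNetMotionModel()
num_params = sum(p.numel() for p in unet.parameters())
print(f"UNetMotionModel: {num_params / 1e6:.1f}M parameters")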
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet3DConditionOutput instead of a plain +tuple. Returns +UNet3DConditionOutput or tuple + +If return_dict is True, an UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/1228b516fe695ad815f33946a872e248.txt b/scrapped_outputs/1228b516fe695ad815f33946a872e248.txt new file mode 100644 index 0000000000000000000000000000000000000000..aee1e636a419504d65502e324985c985e38c0d21 --- /dev/null +++ b/scrapped_outputs/1228b516fe695ad815f33946a872e248.txt @@ -0,0 +1,36 @@ +VQ Diffusion Vector Quantized Diffusion Model for Text-to-Image Synthesis is by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). 
We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. The original codebase can be found at microsoft/VQ-Diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. VQDiffusionPipeline class diffusers.VQDiffusionPipeline < source > ( vqvae: VQModel text_encoder: CLIPTextModel tokenizer: CLIPTokenizer transformer: Transformer2DModel scheduler: VQDiffusionScheduler learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings ) Parameters vqvae (VQModel) — +Vector Quantized Variational Auto-Encoder (VAE) model to encode and decode images to and from latent +representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-base-patch32). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. transformer (Transformer2DModel) — +A conditional Transformer2DModel to denoise the encoded image latents. scheduler (VQDiffusionScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using VQ Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: typing.Union[str, typing.List[str]] num_inference_steps: int = 100 guidance_scale: float = 5.0 truncation_rate: float = 1.0 num_images_per_prompt: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — +Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at +most truncation_rate. The lowest probabilities that would increase the cumulative probability above +truncation_rate are set to zero. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor of shape (batch), optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Must be valid embedding indices.If not provided, a latents tensor will be generated of +completely masked latent pixels. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. truncate < source > ( log_p_x_0: FloatTensor truncation_rate: float ) Truncates log_p_x_0 such that for each column vector, the total cumulative probability is truncation_rate +The lowest probabilities that would increase the cumulative probability above truncation_rate are set to +zero. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/12321c21833cde580ce39c983b8b579f.txt b/scrapped_outputs/12321c21833cde580ce39c983b8b579f.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/12321c21833cde580ce39c983b8b579f.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. 
Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/12478db00ae7f6c44e0a6bb06c7b1f55.txt b/scrapped_outputs/12478db00ae7f6c44e0a6bb06c7b1f55.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/12478db00ae7f6c44e0a6bb06c7b1f55.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. 
The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. 
Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 
💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. 
For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. +For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. 
"scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + 
"transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/125a9c879e9dcbc450f97b285e5500e1.txt b/scrapped_outputs/125a9c879e9dcbc450f97b285e5500e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/125a9c879e9dcbc450f97b285e5500e1.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. 
+See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. 
tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... 
).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. 
max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 language model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first.
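As a stand-in for the music-generation example referenced in the tips near the top of this page (that link did not survive extraction), here is a minimal sketch applying those tips: a descriptive, context-specific prompt, a “Low quality.” negative prompt, and best-of-n selection via num_waveforms_per_prompt. The cvssp/audioldm2-music repository id is an assumption based on the checkpoint table above, and the output sampling rate is read from the vocoder config instead of being hard-coded:

>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline

>>> # the music-specialised checkpoint from the table above (repo id assumed)
>>> pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2-music", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # descriptive, context-specific prompt plus the recommended negative prompt
>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
>>> negative_prompt = "Low quality."

>>> # fix the seed so the best-of-n selection below is reproducible
>>> generator = torch.Generator("cuda").manual_seed(0)

>>> # num_waveforms_per_prompt > 1 triggers automatic CLAP scoring; index 0 is the best-ranked waveform
>>> audio = pipe(
...     prompt,
...     negative_prompt=negative_prompt,
...     num_inference_steps=200,
...     audio_length_in_s=10.0,
...     num_waveforms_per_prompt=3,
...     generator=generator,
... ).audios

>>> scipy.io.wavfile.write("techno_music.wav", rate=pipe.vocoder.config.sampling_rate, data=audio[0])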
forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. 
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). 
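To make the two-stream conditioning concrete, here is a small self-contained PyTorch sketch of a block that attends to two separate sets of encoder hidden states in sequence. This is not the diffusers implementation and all dimensions are made up; it only illustrates the pattern behind encoder_hidden_states and encoder_hidden_states_1 (self-attention followed by multiple cross-attention layers):

import torch
import torch.nn as nn

class DualCrossAttentionBlock(nn.Module):
    """Toy block: self-attention followed by one cross-attention per conditioning stream."""

    def __init__(self, dim, cond_dim_0, cond_dim_1, num_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # one cross-attention layer per conditioning stream, each with its own key/value projections
        self.cross_attn_0 = nn.MultiheadAttention(dim, num_heads, kdim=cond_dim_0, vdim=cond_dim_0, batch_first=True)
        self.cross_attn_1 = nn.MultiheadAttention(dim, num_heads, kdim=cond_dim_1, vdim=cond_dim_1, batch_first=True)

    def forward(self, x, cond_0, cond_1):
        x = x + self.self_attn(x, x, x)[0]
        x = x + self.cross_attn_0(x, cond_0, cond_0)[0]  # e.g. the GPT-2 generated embeddings
        x = x + self.cross_attn_1(x, cond_1, cond_1)[0]  # e.g. the Flan-T5 text embeddings
        return x

block = DualCrossAttentionBlock(dim=64, cond_dim_0=48, cond_dim_1=96)
latents = torch.randn(2, 256, 64)  # flattened spatial positions of a latent spectrogram
cond_0 = torch.randn(2, 8, 48)     # eight generated embedding vectors
cond_1 = torch.randn(2, 77, 96)    # text-encoder hidden states
print(block(latents, cond_0, cond_1).shape)  # torch.Size([2, 256, 64])

The real AudioLDM2 blocks differ in detail (normalization, feed-forward layers, attention processors), but the key point is the same: each conditioning stream gets its own cross-attention projections rather than being concatenated into a single context.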
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/12ad9bf55f4f815b83a7a9d15268e613.txt b/scrapped_outputs/12ad9bf55f4f815b83a7a9d15268e613.txt new file mode 100644 index 0000000000000000000000000000000000000000..242e37fb1de48e73893d11901cd033d448afb601 --- /dev/null +++ b/scrapped_outputs/12ad9bf55f4f815b83a7a9d15268e613.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. 
This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. 
edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
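As a small follow-up to the example above (reusing the same pipe), setting reverse_editing_direction=True for a concept steers the generation away from that concept rather than towards it. The guidance values below are arbitrary illustrations, not tuned settings; for instance, to suppress glasses:

>>> out = pipe(
...     prompt="a photo of the face of a woman",
...     editing_prompt=["glasses, wearing glasses"],
...     reverse_editing_direction=[True],  # steer away from the concept instead of towards it
...     edit_warmup_steps=[10],
...     edit_guidance_scale=[5],
...     edit_threshold=[0.95],
... )
>>> image = out.images[0]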
diff --git a/scrapped_outputs/12bae73ad9c854b39c6b0a7b33caab50.txt b/scrapped_outputs/12bae73ad9c854b39c6b0a7b33caab50.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/12bcc26591ed2d40ef4f3dcb9fce0cd3.txt b/scrapped_outputs/12bcc26591ed2d40ef4f3dcb9fce0cd3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e37bbead67d21277358aedb13e677930bc250b1b --- /dev/null +++ b/scrapped_outputs/12bcc26591ed2d40ef4f3dcb9fce0cd3.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect.

Optimization               | Latency | Speed-up
original                   | 9.50s   | x1
fp16                       | 3.61s   | x2.63
channels last              | 3.30s   | x2.88
traced UNet                | 3.21s   | x2.96
memory efficient attention | 2.63s   | x3.61

Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. Distilled model You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet’s residual and attention blocks are shed to reduce the model size. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model. Learn more about it in the Distilled Stable Diffusion inference guide!
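The table above also lists “channels last” and “memory efficient attention” rows whose code snippets did not make it into this page. A minimal sketch of both settings, assuming a CUDA GPU and, for the explicit xFormers call, that the xformers package is installed (on PyTorch 2.0 and later the built-in scaled_dot_product_attention is already used by default, so that call can be skipped):

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# "channels last": switch the UNet to channels-last memory format
pipe.unet.to(memory_format=torch.channels_last)

# "memory efficient attention": explicit xFormers attention (requires the xformers package)
pipe.enable_xformers_memory_efficient_attention()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]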
diff --git a/scrapped_outputs/12d0a1f562f768a52399f7b6e9cb469e.txt b/scrapped_outputs/12d0a1f562f768a52399f7b6e9cb469e.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/12d0a1f562f768a52399f7b6e9cb469e.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/12d65e6747e2623aea3a71bc754261c6.txt b/scrapped_outputs/12d65e6747e2623aea3a71bc754261c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1313b5d061dd3e79b8f48b11a5a3ea04.txt b/scrapped_outputs/1313b5d061dd3e79b8f48b11a5a3ea04.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2d859bc97e9bd992d2613a0e2e7f43466ad9f8d --- /dev/null +++ b/scrapped_outputs/1313b5d061dd3e79b8f48b11a5a3ea04.txt @@ -0,0 +1,75 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. 
The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. 
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. 
scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/132a563dc582eb60a0de3792634fb9ff.txt b/scrapped_outputs/132a563dc582eb60a0de3792634fb9ff.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/132a563dc582eb60a0de3792634fb9ff.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. 
You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/132d285293ed0e68ec1092d4b4953520.txt b/scrapped_outputs/132d285293ed0e68ec1092d4b4953520.txt new file mode 100644 index 0000000000000000000000000000000000000000..28d0025fe6227f68f990a2d355304bcc0dc60e92 --- /dev/null +++ b/scrapped_outputs/132d285293ed0e68ec1092d4b4953520.txt @@ -0,0 +1,112 @@ +Unconditional Latent Diffusion + + +Overview + +Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion_uncond.py +Unconditional Image Generation +- + +Examples: + + +LDMPipeline + + +class diffusers.LDMPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: DDIMScheduler + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +DDIMScheduler is to be used in combination with unet to denoise the encoded image latents. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
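A minimal generation sketch (an assumption, not taken from the original page: CompVis/ldm-celebahq-256 is used here only as one example of a publicly available unconditional latent diffusion checkpoint; any compatible checkpoint works):

from diffusers import LDMPipeline

# Example checkpoint; substitute your own unconditional LDM weights if preferred.
pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")
pipe = pipe.to("cuda")

# The pipeline is unconditional, so no prompt is passed.
image = pipe(num_inference_steps=50).images[0]
image.save("ldm_generated_image.png")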
+ +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/136cd85275aabed05aaa66181a4e690c.txt b/scrapped_outputs/136cd85275aabed05aaa66181a4e690c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/137fbb11f60db53f51cbd85a46712148.txt b/scrapped_outputs/137fbb11f60db53f51cbd85a46712148.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/137fbb11f60db53f51cbd85a46712148.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. 
Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/13c5e712b287c718393160ff88a93c5c.txt b/scrapped_outputs/13c5e712b287c718393160ff88a93c5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/13c5e712b287c718393160ff88a93c5c.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. 
However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed to +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, a higher view batch size can +speed up the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
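As a rough sketch of how the options above combine (assumptions: the same checkpoint as in the example above; the exact memory savings depend on your GPU), circular padding and sliced VAE decoding can be enabled together for seamless, more memory-friendly panoramas:

import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Decode the VAE in slices to reduce peak memory for wide outputs.
pipe.enable_vae_slicing()

image = pipe(
    "a photo of the dolomites",
    width=2048,
    view_batch_size=2,        # denoise more views per batch if the GPU allows it
    circular_padding=True,    # avoid a visible seam between the left and right edges
).images[0]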
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/13df0b0a023e578ae3f580ffebc6ba38.txt b/scrapped_outputs/13df0b0a023e578ae3f580ffebc6ba38.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/13df0b0a023e578ae3f580ffebc6ba38.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. 
You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/13ef1d17714d91ab09d413c4b602a2eb.txt b/scrapped_outputs/13ef1d17714d91ab09d413c4b602a2eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0ca022fc192e20ccf6ff3307b2799096156b70 --- /dev/null +++ b/scrapped_outputs/13ef1d17714d91ab09d413c4b602a2eb.txt @@ -0,0 +1,44 @@ +Using Diffusers for reinforcement learning + +Support for one RL model and related pipelines is included in the experimental source of diffusers. +More models and examples coming soon! + +Diffuser Value-guided Planning + +You can run the model from Planning with Diffusion for Flexible Behavior Synthesis with Diffusers. +The script is located in the RL Examples folder. +Or, run this example in Colab + +class diffusers.experimental.ValueGuidedRLPipeline + +< +source +> +( +value_function: UNet1DModel +unet: UNet1DModel +scheduler: DDPMScheduler +env + +) + + +Parameters + +value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories base on reward. + + +unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. +env — An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +Pipeline for sampling actions from a diffusion model trained to predict sequences of states. +Original implementation inspired by this repository: https://github.com/jannerm/diffuser. diff --git a/scrapped_outputs/13f4c91d519e870a4592810208afeebb.txt b/scrapped_outputs/13f4c91d519e870a4592810208afeebb.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/13f4c91d519e870a4592810208afeebb.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. 
Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! 
Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model. One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to setup the optimizer to learn the prior mode’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 
🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompts argument to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k,v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/1431a7204d849bd990c610b0e260a4dc.txt b/scrapped_outputs/1431a7204d849bd990c610b0e260a4dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/1431a7204d849bd990c610b0e260a4dc.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. 
Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. ❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce a (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to manually specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. 
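As a small sketch of how manual modes and mode inference interact (reusing the pipe object from the example above; not taken verbatim from the original docs):

# Force unconditional joint generation regardless of the inputs.
pipe.set_joint_mode()
sample = pipe(num_inference_steps=20, guidance_scale=8.0)

# Remove the manual setting; the next call infers the task from its inputs again.
pipe.reset_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)  # inferred as text-to-image
image = sample.images[0]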
You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. 
Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by an image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with UNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. 
This assumes +a full set of VAE, CLIP, and text latents; if supplied, this overrides the values of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
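A short memory-saving sketch using the methods documented above (assumes pipe is a UniDiffuserPipeline loaded as in the earlier examples; the actual savings depend on the hardware):

# Decode latents in slices and tiles to lower peak memory.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
image = sample.images[0]

# Restore single-step decoding when memory is not a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()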
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/14361bfeb2c08409d15e9624c02cb108.txt b/scrapped_outputs/14361bfeb2c08409d15e9624c02cb108.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1467c05b5938dbcf7f87f6f8e12c3371.txt b/scrapped_outputs/1467c05b5938dbcf7f87f6f8e12c3371.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/1467c05b5938dbcf7f87f6f8e12c3371.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). 
You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/147a72e0b20af5f87b56cada1a634612.txt b/scrapped_outputs/147a72e0b20af5f87b56cada1a634612.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/147a72e0b20af5f87b56cada1a634612.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. 
We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. 
+While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. 
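To make the “minimal and reproducible” advice concrete, a bug-report snippet might look like the sketch below; the checkpoint, prompt, and the failing call are placeholders for whatever actually triggers your error, and the output of diffusers-cli env should be pasted alongside it:

```python
# Paste the output of `diffusers-cli env` next to this snippet.
import torch
from diffusers import DiffusionPipeline

# Placeholder checkpoint — use the smallest public model that still reproduces the bug.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.to("cuda")

# Fix the seed so the failure is deterministic, then make the single call that errors out.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe("a placeholder prompt", num_inference_steps=2, generator=generator).images[0]
```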
You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. 
If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. 
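For context, users consume a community pipeline by passing its file name from examples/community to the custom_pipeline argument of from_pretrained(). The sketch below uses the long-prompt-weighting pipeline purely as an example; the checkpoint id is an assumption:

```python
from diffusers import DiffusionPipeline

# The weights come from a regular checkpoint, while the pipeline code is pulled
# from the community folder referenced by `custom_pipeline`.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",  # example community pipeline name
)
```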
The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. 
As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. 
If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. 
E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. 
You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. 
This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/149e3164f9c1b6f05e825d0dbdfc6082.txt b/scrapped_outputs/149e3164f9c1b6f05e825d0dbdfc6082.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/149e3164f9c1b6f05e825d0dbdfc6082.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. 
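Before the API reference, here is a hedged sketch of how this scheduler is normally exercised indirectly through the VQ Diffusion pipeline rather than stepped by hand; the checkpoint id and prompt are assumptions:

```python
from diffusers import VQDiffusionPipeline

# The pipeline instantiates VQDiffusionScheduler and calls its step() method internally.
pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe.to("cuda")

image = pipe("teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("vq_diffusion_sample.png")
```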
VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked latent pixel. num_train_timesteps (int, defaults to 100) — The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.00009) — The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.00009) — The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — The log one-hot vectors of x_t. cumulative (bool) — If cumulative is False, the single-step transition matrix t-1->t is used. If cumulative is True, the cumulative transition matrix 0->t is used. Returns torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Each column of the returned matrix is a row of log probabilities of the complete probability transition matrix. When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be masked. Where q_n is the probability distribution for the forward process of the nth latent pixel, C_0 is a class of a latent pixel embedding, and C_k is the class of the masked latent pixel. Non-cumulative result (omitting logarithms):

q_0(x_t | x_{t-1} = C_0)   ...   q_n(x_t | x_{t-1} = C_0)
          .                .                .
          .                .                .
q_0(x_t | x_{t-1} = C_k)   ...   q_n(x_t | x_{t-1} = C_k)

Cumulative result (omitting logarithms):

q_0_cumulative(x_t | x_0 = C_0)       ...   q_n_cumulative(x_t | x_0 = C_0)
          .                .                .
          .                .                .
q_0_cumulative(x_t | x_0 = C_{k-1})   ...   q_n_cumulative(x_t | x_0 = C_{k-1})

Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — The log probabilities for the predicted classes of the initial latent pixels. Does not include a prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t. t (torch.Long) — The timestep that determines which transition matrix is used.
Returns torch.FloatTensor of shape (batch size, num classes, num latent pixels) The log probabilities for the predicted classes of the image at timestep t-1. Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — The direct output from the learned diffusion model. timestep (torch.int64) — The timestep that determines which transition matrices are used. sample (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at the current timestep. generator (torch.Generator, or None) — A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — Whether or not to return a VQDiffusionSchedulerOutput or tuple. Returns VQDiffusionSchedulerOutput or tuple If return_dict is True, VQDiffusionSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor. Predict the sample from the previous timestep by the reverse transition distribution. See q_posterior() for more details about how the distribution is computed. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the denoising loop. Output class for the scheduler's step function output. diff --git a/scrapped_outputs/14abcc66f4d3a5a176c332d27f1c92ef.txt b/scrapped_outputs/14abcc66f4d3a5a176c332d27f1c92ef.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/14abcc66f4d3a5a176c332d27f1c92ef.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out.
Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/14ac7d59ae50327429795dd02cae513d.txt b/scrapped_outputs/14ac7d59ae50327429795dd02cae513d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/14ac7d59ae50327429795dd02cae513d.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. 
Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/14acadf3b13f814cef45d6ab7c155eea.txt b/scrapped_outputs/14acadf3b13f814cef45d6ab7c155eea.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/14acadf3b13f814cef45d6ab7c155eea.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. 
We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. 
The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! 
Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/14ae903ed3e5b2c442e60e8bf9987063.txt b/scrapped_outputs/14ae903ed3e5b2c442e60e8bf9987063.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/14ae903ed3e5b2c442e60e8bf9987063.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. 
beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/15038bf1372d8ba3eff64f40a4d6042a.txt b/scrapped_outputs/15038bf1372d8ba3eff64f40a4d6042a.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/15038bf1372d8ba3eff64f40a4d6042a.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). 
kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/1511064b6345189addd160d91169048f.txt b/scrapped_outputs/1511064b6345189addd160d91169048f.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/1511064b6345189addd160d91169048f.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. 
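To make get_lms_coefficient() more concrete, here is a small self-contained sketch (not the library implementation) of the underlying idea: each coefficient is the integral of a Lagrange basis polynomial over one sigma interval, and the coefficients weight previously stored derivatives in the multistep update. The sigma schedule, derivative history, and tensor shapes below are toy values for illustration. Copied
import numpy as np
from scipy import integrate

def lms_coefficient(sigmas, order, t, current_order):
    # Integrate the Lagrange basis polynomial for derivative `current_order`
    # over the interval [sigmas[t], sigmas[t + 1]].
    def lms_derivative(tau):
        prod = 1.0
        for k in range(order):
            if current_order == k:
                continue
            prod *= (tau - sigmas[t - k]) / (sigmas[t - current_order] - sigmas[t - k])
        return prod

    coeff, _ = integrate.quad(lms_derivative, sigmas[t], sigmas[t + 1], epsrel=1e-4)
    return coeff

sigmas = np.linspace(10.0, 0.0, 6)            # toy decreasing noise schedule
derivatives = [np.ones(4), 0.5 * np.ones(4)]  # toy derivative history, most recent last
order, t = len(derivatives), 1
coeffs = [lms_coefficient(sigmas, order, t, i) for i in range(order)]

sample = np.zeros(4)
prev_sample = sample + sum(c * d for c, d in zip(coeffs, reversed(derivatives)))
print(coeffs, prev_sample)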
scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/151d7900e516ae5efb9bf0b7420a7ce4.txt b/scrapped_outputs/151d7900e516ae5efb9bf0b7420a7ce4.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/151d7900e516ae5efb9bf0b7420a7ce4.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. 
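As a quick usage sketch, the scheduler can be swapped into an existing pipeline through its config. The checkpoint id and prompt below are placeholders; any compatible Stable Diffusion checkpoint works. Copied
import torch
from diffusers import DiffusionPipeline, KDPM2AncestralDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Replace the default scheduler while keeping the pipeline's scheduler config.
pipeline.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

image = pipeline("a photo of a red fox in the snow", num_inference_steps=30).images[0]
image.save("fox.png")
Because the sampler is ancestral (it injects fresh noise at every step), pass a seeded torch.Generator to the pipeline call if you need reproducible outputs.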
KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. 
return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/15231430cd664d41fcea50b3b7d0e802.txt b/scrapped_outputs/15231430cd664d41fcea50b3b7d0e802.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/15231430cd664d41fcea50b3b7d0e802.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/15253e02217e53c256ed5f52cbb53ca7.txt b/scrapped_outputs/15253e02217e53c256ed5f52cbb53ca7.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/15253e02217e53c256ed5f52cbb53ca7.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. 
Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). 
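The grounding boxes used by both GLIGEN pipelines are normalized [xmin, ymin, xmax, ymax] values in [0, 1]. If you start from pixel coordinates, a small helper like the one below can convert them; it is a hypothetical convenience function for illustration, not part of the library. Copied
def normalize_boxes(pixel_boxes, image_width, image_height):
    # Convert pixel-space [xmin, ymin, xmax, ymax] boxes to normalized [0, 1] values.
    return [
        [xmin / image_width, ymin / image_height, xmax / image_width, ymax / image_height]
        for xmin, ymin, xmax, ymax in pixel_boxes
    ]

# Example: one box on a 512x512 image, ready to pass as `gligen_boxes`.
boxes = normalize_boxes([(137, 312, 244, 368)], 512, 512)
print(boxes)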
__call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to procces reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project image embedding into phrases embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. 
If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box input_phrases_mask (int or List[int]) — +pre phrases mask input defined by the correspongding input_phrases_mask input_images_mask (int or List[int]) — +pre images mask input defined by the correspongding input_images_mask gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... 
num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
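A minimal sketch of enabling the offloading and memory-saving toggles described above; the checkpoint id matches the examples on this page, and the actual memory savings depend on your hardware. Copied
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
)
# Move whole sub-models to the GPU only while they run; no pipe.to("cuda") needed.
pipe.enable_model_cpu_offload()
# Optionally decode the VAE in slices to trade a little speed for lower peak memory.
pipe.enable_vae_slicing()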
prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. 
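The cropping helpers above can be pictured with a short stand-in; the function below is an assumption for illustration (not the pipeline’s internal implementation) showing the usual center-crop-then-resize pattern applied to a reference image. Copied
from PIL import Image

def center_crop_resize(image: Image.Image, new_hw: int) -> Image.Image:
    # Center-crop to a square, then resize to new_hw x new_hw.
    width, height = image.size
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    cropped = image.crop((left, top, left + side, top + side))
    return cropped.resize((new_hw, new_hw), resample=Image.LANCZOS)

# Example: preprocess a local reference image before passing it as `gligen_images`.
# reference = center_crop_resize(Image.open("my_reference.png"), 224)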
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/15473737d581d9e8a41203b8f32e7ddf.txt b/scrapped_outputs/15473737d581d9e8a41203b8f32e7ddf.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/15473737d581d9e8a41203b8f32e7ddf.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. 
Where applicable, Diffusers provides default values for each parameter, such as the training batch size and learning rate, but feel free to change these values in the training command if you'd like. For example, to increase the number of gradient accumulation steps above the default value of 1: Copied
accelerate launch textual_inversion.py \
  --gradient_accumulation_steps=4
Some other basic and important parameters to specify include:
--pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model
--train_data_dir: path to a folder containing the training dataset (example images)
--output_dir: where to save the trained model
--push_to_hub: whether to push the trained model to the Hub
--checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for some reason, you can continue from that checkpoint by adding --resume_from_checkpoint to your training command
--num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs
--placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference)
--initializer_token: a single word that roughly describes the object or style you're trying to train on
--learnable_property: whether you're training the model to learn a new "style" (for example, Van Gogh's painting style) or "object" (for example, your dog)
Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset, for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you'll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler, and models: Copied
# Load tokenizer
if args.tokenizer_name:
    tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
elif args.pretrained_model_name_or_path:
    tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")

# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
The special placeholder token is then added to the tokenizer, and the token embeddings are resized to account for the new token, as sketched below.
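For orientation, here is a minimal sketch of what that token setup looks like. The variable names are illustrative rather than copied from textual_inversion.py, but add_tokens and resize_token_embeddings are the standard transformers calls for registering a new token and seeding its embedding from the initializer token: Copied
# Illustrative sketch; assumes `tokenizer` and `text_encoder` were loaded as shown above.
placeholder_token = "<cat-toy>"  # args.placeholder_token in the real script
initializer_token = "toy"        # args.initializer_token

num_added = tokenizer.add_tokens(placeholder_token)
if num_added == 0:
    raise ValueError(f"The tokenizer already contains the token {placeholder_token}.")

# Resize the embedding matrix so the new token gets its own row...
text_encoder.resize_token_embeddings(len(tokenizer))

# ...and initialize that row from the initializer token's embedding
# (assumes the initializer is a single token).
initializer_id = tokenizer.encode(initializer_token, add_special_tokens=False)[0]
placeholder_id = tokenizer.convert_tokens_to_ids(placeholder_token)

embeddings = text_encoder.get_input_embeddings().weight.data
embeddings[placeholder_id] = embeddings[initializer_id].clone()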
Then, the script creates a dataset from the TextualInversionDataset: Copied
train_dataset = TextualInversionDataset(
    data_root=args.train_data_dir,
    tokenizer=tokenizer,
    size=args.resolution,
    placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))),
    repeats=args.repeats,
    learnable_property=args.learnable_property,
    center_crop=args.center_crop,
    set="train",
)
train_dataloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
)
Finally, the training loop handles everything else, from predicting the noisy residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process. Launch the script Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀 For this guide, you'll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied
from huggingface_hub import snapshot_download

local_dir = "./cat"
snapshot_download(
    "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes"
)
Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you're training on (either "object" or "style") A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script: if you're interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied
--validation_prompt="A <cat-toy> train"
--num_validation_images=4
--validation_steps=100
PyTorch Flax Copied
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATA_DIR="./cat"

accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 \
  --scale_lr \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="textual_inversion_cat" \
  --push_to_hub
After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied
from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipeline("A <cat-toy> train", num_inference_steps=50).images[0]
image.save("cat-train.png")
Next steps Congratulations on training your own Textual Inversion model!
🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/154bd06072ba0cddca9b88c8706634a2.txt b/scrapped_outputs/154bd06072ba0cddca9b88c8706634a2.txt new file mode 100644 index 0000000000000000000000000000000000000000..7172ff07e1b418100afd17352ce66615379947e7 --- /dev/null +++ b/scrapped_outputs/154bd06072ba0cddca9b88c8706634a2.txt @@ -0,0 +1,845 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. 
If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... 
) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
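Precomputing embeddings with encode_prompt is mostly useful when you want to reuse or manipulate them before calling the pipeline. The sketch below assumes a recent diffusers release in which encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple, and reuses the canny_image conditioning prepared in the example above: Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Encode once, reuse for several generations (assumes the tuple return described above).
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "futuristic-looking woman",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# `canny_image` is the conditioning image prepared as in the Canny example above.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=canny_image,
    num_inference_steps=20,
).images[0]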
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. 
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. 
tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for NumPy array, it would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. 
PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image's dimensions. If height and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — The size of the margin in the crop applied to the image and mask. If None, no crop is applied to image and mask_image. If padding_mask_crop is not None, it first finds a rectangular region with the same aspect ratio as the image that contains all of the masked area, and then expands that region by padding_mask_crop. The image and mask_image are then cropped to the expanded region before being resized to the original image size for inpainting. This is useful when the masked area is small while the image is large and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 1.0) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image. num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting).
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-Adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +...
"https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... 
).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/156dc2f63e2897d6e917fdb66e03ceed.txt b/scrapped_outputs/156dc2f63e2897d6e917fdb66e03ceed.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1579287ef3b82e6cee2ec6761193bdc5.txt b/scrapped_outputs/1579287ef3b82e6cee2ec6761193bdc5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/1579287ef3b82e6cee2ec6761193bdc5.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speed up training. You can reduce your memory usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with them, and how you can adapt them for your own use case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model.
One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to set up the optimizer to learn the prior model’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results.
prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/15810ea24ffdabc836f304286c049b56.txt b/scrapped_outputs/15810ea24ffdabc836f304286c049b56.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/15810ea24ffdabc836f304286c049b56.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide.
Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the make_image_grid helper function you imported at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at its structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/15d81f6af4d6fc2e3f674cb9319d9c4c.txt b/scrapped_outputs/15d81f6af4d6fc2e3f674cb9319d9c4c.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/15d81f6af4d6fc2e3f674cb9319d9c4c.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... 
torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. 
If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
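Since load_lora_weights() above has no inline example for this pipeline, the following is a small sketch of how it would typically be used with the depth-to-image pipeline; the LoRA repository id and weight file name below are placeholders, not real checkpoints:

import torch
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder repository id and weight name: point these at a LoRA actually trained for this model
pipe.load_lora_weights(
    "your-username/your-depth-lora", weight_name="pytorch_lora_weights.safetensors", adapter_name="style"
)

# Subsequent pipe(...) calls now run with the LoRA layers applied to the UNet (and text encoder, if present)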
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/16117154cb42ac11702ca2a61e4ea041.txt b/scrapped_outputs/16117154cb42ac11702ca2a61e4ea041.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/16117154cb42ac11702ca2a61e4ea041.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/16200b0d871b35675b46ba4b1a5c0452.txt b/scrapped_outputs/16200b0d871b35675b46ba4b1a5c0452.txt new file mode 100644 index 0000000000000000000000000000000000000000..b06040eb958f24dac956d0613c545aa730efd563 --- /dev/null +++ b/scrapped_outputs/16200b0d871b35675b46ba4b1a5c0452.txt @@ -0,0 +1,255 @@ +Pipelines + +Pipelines provide a simple way to run state-of-the-art diffusion models in inference. +Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler +components - all of which are needed to have a functioning end-to-end diffusion system. +As an example, Stable Diffusion has three independently trained models: +Autoencoder +Conditional Unet +CLIP text encoder +a scheduler component, scheduler, +a CLIPFeatureExtractor, +as well as a safety checker. +All of these components are necessary to run stable diffusion in inference even though they were trained +or created independently from each other. +To that end, we strive to offer all open-sourced, state-of-the-art diffusion system under a unified API. +More specifically, we strive to provide pipelines that +can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (e.g. LDMTextToImagePipeline, uses the officially released weights of High-Resolution Image Synthesis with Latent Diffusion Models), +have a simple user interface to run the model in inference (see the Pipelines API section), +are easy to understand with code that is self-explanatory and can be read along-side the official paper (see Pipelines summary), +can easily be contributed by the community (see the Contribution section). +Note that pipelines do not (and should not) offer any training functionality. +If you are looking for official training examples, please have a look at examples. + +🧨 Diffusers Summary + +The following table summarizes all officially supported pipelines, their corresponding paper, and if +available a colab notebook to directly try them out. 
+Pipeline +Paper +Tasks +Colab +alt_diffusion +AltDiffusion +Image-to-Image Text-Guided Generation +- +audio_diffusion +Audio Diffusion +Unconditional Audio Generation + +cycle_diffusion +Cycle Diffusion +Image-to-Image Text-Guided Generation + +dance_diffusion +Dance Diffusion +Unconditional Audio Generation + +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation + +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image + +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation + +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting + +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation + +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +stable_diffusion +Stable Diffusion +Text-to-Image Generation + +stable_diffusion +Stable Diffusion +Image-to-Image Text-Guided Generation + +stable_diffusion +Stable Diffusion +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation + +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation + +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation + +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image Generation + +Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. +However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the Examples below. + +Pipelines API + +Diffusion models often consist of multiple independently-trained models or other previously existing components. +Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one. +During inference, we however want to be able to easily load all components and use them in inference - even if one component, e.g. CLIP’s text encoder, originates from a different library, such as Transformers. To that end, all pipelines provide the following functionality: +from_pretrained method that accepts a Hugging Face Hub repository id, e.g. runwayml/stable-diffusion-v1-5 or a path to a local directory, e.g. +”./stable-diffusion”. 
To correctly retrieve which models and components should be loaded, one has to provide a model_index.json file, e.g. runwayml/stable-diffusion-v1-5/model_index.json, which defines all components that should be +loaded into the pipelines. More specifically, for each model/component one needs to define the format <name>: ["<library>", "<class name>"]. <name> is the attribute name given to the loaded instance of <class name> which can be found in the library or pipeline folder called "<library>". +save_pretrained that accepts a local path, e.g. ./stable-diffusion under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, e.g. ./stable_diffusion/unet. +In addition, a model_index.json file is created at the root of the local path, e.g. ./stable_diffusion/model_index.json so that the complete pipeline can again be instantiated +from the local path. +to which accepts a string or torch.device to move all models that are of type torch.nn.Module to the passed device. The behavior is fully analogous to PyTorch’s to method. +__call__ method to use the pipeline in inference. __call__ defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the __call__ method can strongly vary from pipeline to pipeline. E.g. a text-to-image pipeline, such as StableDiffusionPipeline should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as DDPMPipeline on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for +each pipeline, one should look directly into the respective pipeline. +Note: All pipelines have PyTorch’s autograd disabled by decorating the __call__ method with a torch.no_grad decorator because pipelines should +not be used for training. If you want to store the gradients during the forward pass, we recommend writing your own pipeline, see also our community-examples + +Contribution + +We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire +all of our pipelines to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. +Self-contained: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file itself, should be inherited from (and only from) the DiffusionPipeline class or be directly attached to the model and scheduler components of the pipeline. +Easy-to-use: Pipelines should be extremely easy to use - one should be able to load the pipeline and +use it for its designated task, e.g. text-to-image generation, in just a couple of lines of code. Most +logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the __call__ method. +Easy-to-tweak: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our community-examples. 
If you feel that an important pipeline should be part of the official pipelines but isn’t, a contribution to the official pipelines would be even better. +One-purpose-only: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, e.g. image2image translation and in-painting, pipelines shall be used for one task only to keep them easy-to-tweak and readable. + +Examples + + +Text-to-Image generation with Stable Diffusion + + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] + +image.save("astronaut_rides_horse.png") + +Image-to-Image text-guided generation with Stable Diffusion + +The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. + + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import StableDiffusionImg2ImgPipeline + +# load the pipeline +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + device +) + +# let's download an initial image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((768, 512)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +images[0].save("fantasy_landscape.png") +You can also run this example on colab + +Tweak prompts reusing seeds and latents + +You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. This notebook shows how to do it step by step. You can also run it in Google Colab . + +In-painting using Stable Diffusion + +The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and text prompt. 
+ + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +You can also run this example on colab diff --git a/scrapped_outputs/163a65e2082838dc76c977c438437643.txt b/scrapped_outputs/163a65e2082838dc76c977c438437643.txt new file mode 100644 index 0000000000000000000000000000000000000000..70e8ff8ae9a89a38f63fb94929d9090c96587fe0 --- /dev/null +++ b/scrapped_outputs/163a65e2082838dc76c977c438437643.txt @@ -0,0 +1,435 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. 
These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. 
Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + beta_schedule="linear", + clip_sample=False, + timestep_spacing="linspace", + steps_offset=1 +) + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_vae_tiling() + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# run inference +output = pipe( + prompt="a panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=20, + generator=torch.Generator("cpu").manual_seed(666), +) + +# disable FreeInit +pipe.disable_free_init() + +frames = output.frames[0] +export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Using AnimateLCM AnimateLCM is a motion module checkpoint and an LCM LoRA that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors. 
Copied import torch +from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") +pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") + +pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") + +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=1.5, + num_inference_steps=6, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animatelcm.gif") A space rocket, 4K. + AnimateLCM is also compatible with existing Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") +pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") + +pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up") + +pipe.set_adapters(["lcm-lora", "tilt-up"], [1.0, 0.8]) +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=1.5, + num_inference_steps=6, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animatelcm-motion-lora.gif") A space rocket, 4K. + AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → AnimateDiffPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. 
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). 
callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput or tuple + +If return_dict is True, pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — +List of video outputs - It can be a nested list of length batch_size, with each sub-list containing denoised Output class for AnimateDiff pipelines. PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape +(batch_size, num_frames, channels, height, width) diff --git a/scrapped_outputs/163eb9ccfc9cbec2f732eae7f0be65ee.txt b/scrapped_outputs/163eb9ccfc9cbec2f732eae7f0be65ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/163eb9ccfc9cbec2f732eae7f0be65ee.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. 
In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory efficient attention 2.63s x3.61 Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speeds up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. diff --git a/scrapped_outputs/16b9a73ed884d18d7fa96a5c03f384e8.txt b/scrapped_outputs/16b9a73ed884d18d7fa96a5c03f384e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/16b9a73ed884d18d7fa96a5c03f384e8.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. 
If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
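To make that relationship concrete, here is a minimal arithmetic sketch (an illustration only, not the pipeline's internal code):

# illustration of the strength / num_inference_steps relationship described above
num_inference_steps = 50
strength = 0.8

# roughly int(num_inference_steps * strength) denoising steps actually run
effective_steps = int(num_inference_steps * strength)
print(effective_steps)  # 40: 40 steps of noise are added, then denoised away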
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
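Putting the tips above together, here is one possible end-to-end setup, a sketch assuming a CUDA GPU with PyTorch 2.0+ (so scaled-dot product attention is used automatically) and reusing the same checkpoint, image, and prompt as the earlier examples; if memory is tight, drop torch.compile and use enable_model_cpu_offload() instead of .to("cuda"):

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# load the pipeline in half precision (checkpoint reused from the examples above)
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# PyTorch 2.0+ already uses scaled-dot product attention; on older versions you could
# call pipeline.enable_xformers_memory_efficient_attention() instead
# compile the UNet for an extra speedup (the first call is slower while the graph is captured)
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
)
image = pipeline(
    "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    image=init_image,
    strength=0.6,
    guidance_scale=8.0,
).images[0]
image.save("astronaut.png")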
diff --git a/scrapped_outputs/1758de789537eaab9c4334c25dee5758.txt b/scrapped_outputs/1758de789537eaab9c4334c25dee5758.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c97c654c1ee35cd7df313c6a56cd1af6d619611 --- /dev/null +++ b/scrapped_outputs/1758de789537eaab9c4334c25dee5758.txt @@ -0,0 +1,78 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. The callback function is executed at the end of each step, and modifies the pipeline attributes and variables for the next step. This is really useful for dynamically adjusting certain pipeline attributes or modifying tensor variables. This versatility allows for interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. With callbacks, you can implement new features without modifying the underlying code! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! This guide will demonstrate how callbacks work by a few features you can implement with them. Dynamic classifier-free guidance Dynamic classifier-free guidance (CFG) is a feature that allows you to disable CFG after a certain number of inference steps which can help you save compute with minimal cost to performance. The callback function for this should have the following arguments: pipeline (or the pipeline instance) provides access to important properties such as num_timesteps and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipeline._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timesteps. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipeline.num_timesteps * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipeline._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. 
Copied import torch +from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipeline( + prompt, + generator=generator, + callback_on_step_end=callback_dynamic_cfg, + callback_on_step_end_tensor_inputs=['prompt_embeds'] +) + +out.images[0].save("out_custom_cfg.png") Interrupt the diffusion process The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. Stopping the diffusion process early is useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. This callback function should take the following arguments: pipeline, i, t, and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipeline.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipeline, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipeline._interrupt = True + + return callback_kwargs + +pipeline( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) Display image after each generation step This tip was contributed by asomoza. Display an image after each generation step by accessing and converting the latents after each step into an image. The latent space is compressed to 128x128, so the images are also 128x128 which is useful for a quick preview. Use the function below to convert the SDXL latents (4 channels) to RGB tensors (3 channels) as explained in the Explaining the SDXL latent space blog post. Copied def latents_to_rgb(latents): + weights = ( + (60, -60, 25, -70), + (60, -5, 15, -50), + (60, 10, -5, -35) + ) + + weights_tensor = torch.t(torch.tensor(weights, dtype=latents.dtype).to(latents.device)) + biases_tensor = torch.tensor((150, 140, 130), dtype=latents.dtype).to(latents.device) + rgb_tensor = torch.einsum("...lxy,lr -> ...rxy", latents, weights_tensor) + biases_tensor.unsqueeze(-1).unsqueeze(-1) + image_array = rgb_tensor.clamp(0, 255)[0].byte().cpu().numpy() + image_array = image_array.transpose(1, 2, 0) + + return Image.fromarray(image_array) Create a function to decode and save the latents into an image. Copied def decode_tensors(pipe, step, timestep, callback_kwargs): + latents = callback_kwargs["latents"] + + image = latents_to_rgb(latents) + image.save(f"{step}.png") + + return callback_kwargs Pass the decode_tensors function to the callback_on_step_end parameter to decode the tensors after each step. You also need to specify what you want to modify in the callback_on_step_end_tensor_inputs parameter, which in this case are the latents. 
Copied from diffusers import AutoPipelineForText2Image +import torch +from PIL import Image + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + variant="fp16", + use_safetensors=True +).to("cuda") + +image = pipeline( + prompt="A croissant shaped like a cute bear.", + negative_prompt="Deformed, ugly, bad anatomy", + callback_on_step_end=decode_tensors, + callback_on_step_end_tensor_inputs=["latents"], +).images[0] step 0 step 19 step 29 step 39 step 49 diff --git a/scrapped_outputs/1771884b91c90349cf9c04876ad7bd8c.txt b/scrapped_outputs/1771884b91c90349cf9c04876ad7bd8c.txt new file mode 100644 index 0000000000000000000000000000000000000000..59083c57c1a632ae7752bc6ded438137105156ce --- /dev/null +++ b/scrapped_outputs/1771884b91c90349cf9c04876ad7bd8c.txt @@ -0,0 +1,102 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) —
It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. 
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. 
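The step-level methods above and below are normally driven by a pipeline rather than called by hand. As a rough usage sketch (the checkpoint name here is only an example and not part of this API), swapping this scheduler into an existing pipeline and overriding a few of the options described above looks like:

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# example checkpoint; any pipeline with a compatible scheduler config works the same way
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# reuse the existing scheduler config and switch to DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2, use_karras_sigmas=True
)

# the pipeline calls set_timesteps() and step() internally; DPMSolver++ typically needs far fewer steps
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]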
set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/17b4a049d39be51eaf3db39e64109bc7.txt b/scrapped_outputs/17b4a049d39be51eaf3db39e64109bc7.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/17b4a049d39be51eaf3db39e64109bc7.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. 
Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. 
Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. 
Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
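For intuition about what is being computed, here is a rough NumPy/SciPy sketch of the Fréchet distance between the two fitted Gaussians; it is not the implementation used below (torchmetrics handles the Inception feature extraction and the distance for you):

import numpy as np
from scipy import linalg


def frechet_distance(real_features, fake_features):
    # fit a Gaussian (mean and covariance) to each set of Inception features
    mu_r, sigma_r = real_features.mean(axis=0), np.cov(real_features, rowvar=False)
    mu_g, sigma_g = fake_features.mean(axis=0), np.cov(fake_features, rowvar=False)

    # squared distance between the means plus a covariance term
    diff = mu_r - mu_g
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)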
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
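For example, the Kernel Inception Distance mentioned above can be computed with torchmetrics in much the same way. A sketch reusing real_images and fake_images (subset_size must not exceed the number of images, so it is lowered here for this tiny dataset):

from torchmetrics.image.kid import KernelInceptionDistance

# KID compares small random subsets of features, so subset_size must be <= the number of images
kid = KernelInceptionDistance(subset_size=5)

# the default API expects uint8 images in [0, 255]
kid.update((real_images * 255).to(torch.uint8), real=True)
kid.update((fake_images * 255).to(torch.uint8), real=False)

kid_mean, kid_std = kid.compute()
print(f"KID: {float(kid_mean)} +/- {float(kid_std)}")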
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/17b54f38e818bff8fcd7316f3b19f88e.txt b/scrapped_outputs/17b54f38e818bff8fcd7316f3b19f88e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f971d25fc44aa74df592b1a56356146d3ed210ee --- /dev/null +++ b/scrapped_outputs/17b54f38e818bff8fcd7316f3b19f88e.txt @@ -0,0 +1,83 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable DIffusion with samplers from k-diffusion. Note that most the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/18241b7878555f32a3215089355279c0.txt b/scrapped_outputs/18241b7878555f32a3215089355279c0.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/18241b7878555f32a3215089355279c0.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image from a text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model. +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will automatically accept the license for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next, we install diffusers and its dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF.
Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default, diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM. Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
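For example, reusing the components of the stage_1 and stage_2 pipelines loaded in the text-to-image example above could look like the following minimal sketch (the variable names refer to that earlier snippet):

Copied
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# reuse the weights that are already in memory instead of loading them a second time
img2img_stage_1 = IFImg2ImgPipeline(**stage_1.components)
img2img_stage_2 = IFImg2ImgSuperResolutionPipeline(**stage_2.components)

If you prefer to load the image-to-image pipelines from scratch, the full example below does exactly that.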
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can also be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
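The full inpainting example below downloads a ready-made mask. If you want to supply your own mask instead, a minimal sketch of building one with PIL could look like this (the box coordinates are purely illustrative, and original_image stands for the PIL image you want to edit):

Copied
from PIL import Image, ImageDraw

# white (255) regions of the mask are repainted by the pipeline, black (0) regions are preserved
mask_image = Image.new("L", original_image.size, 0)
draw = ImageDraw.Draw(mask_image)
draw.rectangle((50, 100, 250, 200), fill=255)  # illustrative region to repaint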
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, Pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for a smaller number of timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Inpainting - pipeline_if_inpainting_superresolution.py Inpainting - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings.
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
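Since both stages in the example above reuse the same prompt embeddings, it can be convenient to call encode_prompt once and pass a negative prompt at the same time. The following is a minimal sketch of that pattern and is not part of the original example; the negative prompt text is purely illustrative. Copied
import torch
from diffusers import IFInpaintingPipeline

pipe = IFInpaintingPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Encode once; both tensors can then be reused by the stage I and stage II pipelines.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "blue sunglasses",
    negative_prompt="blurry, low quality",  # illustrative negative prompt
    clean_caption=True,  # requires beautifulsoup4 and ftfy
)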
diff --git a/scrapped_outputs/184a2eb38e718462dd7c2d365f68ee71.txt b/scrapped_outputs/184a2eb38e718462dd7c2d365f68ee71.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1866f8160439dfe71a2e15f14a8657e7.txt b/scrapped_outputs/1866f8160439dfe71a2e15f14a8657e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1872a92a601d5b882b340a24632d9e13.txt b/scrapped_outputs/1872a92a601d5b882b340a24632d9e13.txt new file mode 100644 index 0000000000000000000000000000000000000000..f94f567f98cc2ea0c0b0894f04d08e55446dbcc1 --- /dev/null +++ b/scrapped_outputs/1872a92a601d5b882b340a24632d9e13.txt @@ -0,0 +1,71 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. 
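To make maybe_convert_prompt concrete, the sketch below assumes an embedding has been loaded under the <cat-toy> token; note that the pipeline already applies this conversion internally, so calling it directly is only needed if you tokenize prompts yourself. Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")  # registers the <cat-toy> token

# If the loaded embedding contains several vectors, <cat-toy> is expanded to
# <cat-toy> <cat-toy>_1 <cat-toy>_2 ... so each vector gets its own input token;
# otherwise the prompt is returned unchanged.
prompt = pipe.maybe_convert_prompt("A <cat-toy> backpack", pipe.tokenizer)
print(prompt)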
diff --git a/scrapped_outputs/18753cc90a3dddbc69a9975950b4460f.txt b/scrapped_outputs/18753cc90a3dddbc69a9975950b4460f.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/18753cc90a3dddbc69a9975950b4460f.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. 
As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script that are relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, the training loop starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image. Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process.
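To see why conv_in is widened to 8 input channels, recall that at each training step the noisy latents of the edited image are concatenated with the VAE latents of the original image along the channel dimension before being passed to the UNet. The shape-only sketch below uses made-up tensor sizes and is not taken from the script itself. Copied
import torch

# Hypothetical latents for a 256x256 image: 4 latent channels, 32x32 spatial resolution.
noisy_latents = torch.randn(1, 4, 32, 32)          # noisy latents of the edited image
original_image_embeds = torch.randn(1, 4, 32, 32)  # VAE latents of the original image

# Channel-wise concatenation yields the 8-channel input expected by the modified conv_in.
concatenated_latents = torch.cat([noisy_latents, original_image_embeds], dim=1)
print(concatenated_latents.shape)  # torch.Size([1, 8, 32, 32])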
Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 
🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/18d84435882fdf2448f734b2932655f4.txt b/scrapped_outputs/18d84435882fdf2448f734b2932655f4.txt new file mode 100644 index 0000000000000000000000000000000000000000..0edb177b2ecc106af9689aa9d54df820cf9faa8f --- /dev/null +++ b/scrapped_outputs/18d84435882fdf2448f734b2932655f4.txt @@ -0,0 +1,2 @@ +Spectrogram Diffusion Spectrogram Diffusion is by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel. An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes. The original codebase can be found at magenta/music-spectrogram-diffusion. As depicted above the model takes as input a MIDI file and tokenizes it into a sequence of 5 second intervals. Each tokenized interval then together with positional encodings is passed through the Note Encoder and its representation is concatenated with the previous window’s generated spectrogram representation obtained via the Context Encoder. For the initial 5 second window this is set to zero. The resulting context is then used as conditioning to sample the denoised Spectrogram from the MIDI window and we concatenate this spectrogram to the final output as well as use it for the context of the next MIDI window. The process repeats till we have gone over all the MIDI inputs. Finally a MelGAN decoder converts the potentially long spectrogram to audio which is the final result of this pipeline. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SpectrogramDiffusionPipeline class diffusers.SpectrogramDiffusionPipeline < source > ( *args **kwargs ) __call__ ( *args **kwargs ) Call self as a function. 
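For reference, usage of this pipeline has historically looked like the sketch below. It requires the note_seq package for MIDI processing, and because the class reference above only exposes *args and **kwargs, the exact interface may differ in the diffusers version you have installed. Copied
from diffusers import MidiProcessor, SpectrogramDiffusionPipeline

pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion").to("cuda")
processor = MidiProcessor()

# Use any MIDI file, for example beethoven_hammerklavier_2.mid from piano-midi.de.
output = pipe(processor("beethoven_hammerklavier_2.mid"))
audio = output.audios[0]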
AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/191ab7708ac80c5fdf8b48c6578cda24.txt b/scrapped_outputs/191ab7708ac80c5fdf8b48c6578cda24.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/191ab7708ac80c5fdf8b48c6578cda24.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. ❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce a (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. 
Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to manually specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. 
+This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. 
unet (UniDiffuserModel) — +A U-ViT model with UNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode.
If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This is assumed to be +a full set of VAE, CLIP, and text latents and, if supplied, it overrides the value of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding.
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/191e9c619e0373d4ec78f70f35a2a266.txt b/scrapped_outputs/191e9c619e0373d4ec78f70f35a2a266.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/191e9c619e0373d4ec78f70f35a2a266.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
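Before the API reference, here is a minimal usage sketch that ties the tips above together: a DDIMScheduler, a wide output, and circular padding. The checkpoint name is an assumption for illustration; any compatible Stable Diffusion checkpoint should work. Copied
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"  # assumed checkpoint
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of the dolomites"
# width=2048 gives the panorama-like aspect ratio; circular_padding avoids a visible seam
# between the left and right edges when the result is viewed as a 360-degree panorama.
image = pipe(prompt, width=2048, circular_padding=True).images[0]
image.save("dolomites_panorama.png")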
StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
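As a short illustration of the memory helpers documented above, sliced VAE decoding can be enabled before generating an especially wide panorama and disabled afterwards. This sketch reuses the pipe object from the example above; the width value is an arbitrary choice: Copied
pipe.enable_vae_slicing()   # decode the wide latents in slices to lower peak memory
image = pipe("a photo of the dolomites", width=3072).images[0]
pipe.disable_vae_slicing()  # back to single-step decoding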
diff --git a/scrapped_outputs/1935e264f42216de9bc8b1a0a45c5318.txt b/scrapped_outputs/1935e264f42216de9bc8b1a0a45c5318.txt new file mode 100644 index 0000000000000000000000000000000000000000..86c0719a2317d8cc8ac7716a79b72e0231f612d9 --- /dev/null +++ b/scrapped_outputs/1935e264f42216de9bc8b1a0a45c5318.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. 
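A minimal sketch of the continuous-input path described above (the configuration values below are arbitrary small sizes chosen for illustration, not defaults from any checkpoint): Copied
import torch
from diffusers import Transformer2DModel

model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=32,    # inner dimension = 8 * 32 = 256
    in_channels=64,           # continuous input: (batch, channels, height, width)
    num_layers=1,
    cross_attention_dim=256,  # size of the conditioning embeddings
)

hidden_states = torch.randn(1, 64, 16, 16)       # image-like latents
encoder_hidden_states = torch.randn(1, 77, 256)  # e.g. a text-encoder output

with torch.no_grad():
    out = model(hidden_states, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # torch.Size([1, 64, 16, 16]), reshaped back to an image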
forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. diff --git a/scrapped_outputs/1951d9aac3434acef4c017be3a237f29.txt b/scrapped_outputs/1951d9aac3434acef4c017be3a237f29.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/1951d9aac3434acef4c017be3a237f29.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. 
This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.00009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.00009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. 
Where:

q_n is the probability distribution for the forward process of the nth latent pixel.
C_0 is a class of a latent pixel embedding.
C_k is the class of the masked latent pixel.

non-cumulative result (omitting logarithms):

q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
          .                               .
          .                               .
          .                               .
q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)

cumulative result (omitting logarithms):

q_0_cumulative(x_t | x_0 = C_0)     ... q_n_cumulative(x_t | x_0 = C_0)
          .                                            .
          .                                            .
          .                                            .
q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1})

Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — The log probabilities for the predicted classes of the initial latent pixels. Does not include a prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t. t (torch.Long) — The timestep that determines which transition matrix is used. Returns torch.FloatTensor of shape (batch size, num classes, num latent pixels) The log probabilities for the predicted classes of the image at timestep t-1. Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved to. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t. generator (torch.Generator, or None) — A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — Whether or not to return a VQDiffusionSchedulerOutput or tuple. Returns VQDiffusionSchedulerOutput or tuple If return_dict is True, VQDiffusionSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor. Predict the sample from the previous timestep by the reverse transition distribution. See q_posterior() for more details about how the distribution is computed. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the denoising loop. Output class for the scheduler's step function output.
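To see how these methods fit together, here is a minimal sketch of the reverse process in which the transformer is replaced by random log-probabilities purely for illustration (the codebook size of 4096 plus one masked class is an assumption): Copied
import torch
from diffusers import VQDiffusionScheduler

scheduler = VQDiffusionScheduler(num_vec_classes=4097)  # 4096 codebook classes + 1 masked class (assumed)
scheduler.set_timesteps(100)

batch_size, num_latent_pixels = 1, 1024
sample = torch.full((batch_size, num_latent_pixels), 4096, dtype=torch.long)  # start fully masked

for t in scheduler.timesteps:
    # a real model would predict log-probabilities over the 4096 unmasked classes here
    model_output = torch.log_softmax(torch.randn(batch_size, 4096, num_latent_pixels), dim=1)
    sample = scheduler.step(model_output, t, sample).prev_sample  # classes of each latent pixel at t-1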
diff --git a/scrapped_outputs/1956496e5c0c12c8b1447488dc8f0f49.txt b/scrapped_outputs/1956496e5c0c12c8b1447488dc8f0f49.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/1956496e5c0c12c8b1447488dc8f0f49.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/199c4a163091108b70a7f2b4f7ec4a0c.txt b/scrapped_outputs/199c4a163091108b70a7f2b4f7ec4a0c.txt new file mode 100644 index 0000000000000000000000000000000000000000..f1c1a9c2dd958669628fba113f9bf7c7441fb5bf --- /dev/null +++ b/scrapped_outputs/199c4a163091108b70a7f2b4f7ec4a0c.txt @@ -0,0 +1,234 @@ +Variance exploding, stochastic sampling from Karras et. al + + +Overview + +Original paper can be found here. + +KarrasVeScheduler + + +class diffusers.KarrasVeScheduler + +< +source +> +( +sigma_min: float = 0.02 +sigma_max: float = 100 +s_noise: float = 1.007 +s_churn: float = 80 +s_min: float = 0.05 +s_max: float = 50 + +) + + +Parameters + +sigma_min (float) — minimum noise magnitude + + +sigma_max (float) — maximum noise magnitude + + +s_noise (float) — the amount of additional noise to counteract loss of detail during sampling. +A reasonable range is [1.000, 1.011]. + + +s_churn (float) — the parameter controlling the overall amount of stochasticity. +A reasonable range is [0, 100]. + + +s_min (float) — the start value of the sigma range where we add noise (enable stochasticity). +A reasonable range is [0, 10]. + + +s_max (float) — the end value of the sigma range where we add noise. +A reasonable range is [0.2, 80]. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details on the parameters, see the original paper’s Appendix E.: “Elucidating the Design Space of +Diffusion-Based Generative Models.” https://arxiv.org/abs/2206.00364. The grid search values used to find the +optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. + +add_noise_to_input + +< +source +> +( +sample: FloatTensor +sigma: float +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Explicit Langevin-like “churn” step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. 
+TODO Args: + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +return_dict: bool = True + +) +→ +KarrasVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class +KarrasVeOutput — updated sample in the diffusion chain and derivative (TODO double check). + + +Returns + +KarrasVeOutput or tuple + + + +KarrasVeOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). + +step_correct + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +sample_prev: FloatTensor +derivative: FloatTensor +return_dict: bool = True + +) +→ +prev_sample (TODO) + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +sample_prev (torch.FloatTensor) — TODO + + +derivative (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class + + +Returns + +prev_sample (TODO) + + + +updated sample in the diffusion chain. derivative (TODO): TODO + + +Correct the predicted sample based on the output model_output of the network. TODO complete description diff --git a/scrapped_outputs/19d5e12573f3571455aefa124c40edf3.txt b/scrapped_outputs/19d5e12573f3571455aefa124c40edf3.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a4046e75f9452321616835465fe3146c7ab0c46 --- /dev/null +++ b/scrapped_outputs/19d5e12573f3571455aefa124c40edf3.txt @@ -0,0 +1,215 @@ +Configuration + +The handling of configurations in Diffusers is with the ConfigMixin class. + +class diffusers.ConfigMixin + +< +source +> +( +) + + + +Base class for all configuration classes. Stores all configuration parameters under self.config Also handles all +methods for loading/downloading/saving classes inheriting from ConfigMixin with +from_config() +save_config() +Class attributes: +config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). +ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). 
+has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). +_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). + +from_config + +< +source +> +( +config: typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +config (Dict[str, Any]) — +A config dictionary from which the Python class will be instantiated. Make sure to only load +configuration files of compatible classes. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the Python class. +**kwargs will be directly passed to the underlying scheduler/model’s __init__ method and eventually +overwrite same named arguments of config. + + + +Instantiate a Python class from a config dictionary + +Examples: + + + Copied +>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) + +load_config + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using save_config(), e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). 
+ + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + + +Instantiate a Python class from a config dictionary +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +save_config + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a configuration object to the directory save_directory, so that it can be re-loaded using the +from_config() class method. + +to_json_file + +< +source +> +( +json_file_path: typing.Union[str, os.PathLike] + +) + + +Parameters + +json_file_path (str or os.PathLike) — +Path to the JSON file in which this configuration instance’s parameters will be saved. + + + +Save this instance to a JSON file. + +to_json_string + +< +source +> +( +) +→ +str + +Returns + +str + + + +String containing all the attributes that make up this configuration instance in JSON format. + + +Serializes this instance to a JSON string. +Under further construction 🚧, open a PR if you want to contribute! diff --git a/scrapped_outputs/19ec11235b8988ef42057c8ed1e466f1.txt b/scrapped_outputs/19ec11235b8988ef42057c8ed1e466f1.txt new file mode 100644 index 0000000000000000000000000000000000000000..3fe8c7053aebf337174e539c954d11ce8ae19ba3 --- /dev/null +++ b/scrapped_outputs/19ec11235b8988ef42057c8ed1e466f1.txt @@ -0,0 +1,176 @@ +Denoising diffusion probabilistic models (DDPM) + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original paper can be found here. 
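Before the API reference, a short sketch of the standard DDPM sampling loop with this scheduler and a small pretrained UNet (this assumes the commonly used google/ddpm-cifar10-32 checkpoint exposes its UNet weights at the repository root): Copied
import torch
from diffusers import DDPMScheduler, UNet2DModel

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")

scheduler.set_timesteps(50)  # fewer steps than the 1000 training steps, for a quick (lower-quality) sample
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample                     # predicted noise (epsilon)
    sample = scheduler.step(noise_pred, t, sample).prev_sample   # x_{t-1}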
+ +DDPMScheduler + + +class diffusers.DDPMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +variance_type: str = 'fixed_small' +clip_sample: bool = True +prediction_type: str = 'epsilon' +**kwargs + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and +Langevin dynamics sampling. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2006.11239 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator = None +return_dict: bool = True +**kwargs + +) +→ +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. 
+ + +return_dict (bool) — option for returning tuple rather than DDPMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDPMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/19ed7ee6b3aa2039dcf29daec6ded942.txt b/scrapped_outputs/19ed7ee6b3aa2039dcf29daec6ded942.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/19ed7ee6b3aa2039dcf29daec6ded942.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the InstructPix2Pix relevant parts of the script. The script begins by modifing the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and and edit instructions are preprocessed and tokenized. It is important the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, in the training loop, it starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image. 
Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 
Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/1a463b64ddbe4d267ac8d9384b0ff205.txt b/scrapped_outputs/1a463b64ddbe4d267ac8d9384b0ff205.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/1a463b64ddbe4d267ac8d9384b0ff205.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). 
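As a rough illustration of how these timestep-conditioned norms are used (a sketch assuming the constructor and forward signatures shown on this page; the sizes are arbitrary), AdaLayerNorm layer-normalizes its input and then scales and shifts it with an embedding of the current timestep: Copied
import torch
from diffusers.models.normalization import AdaLayerNorm

norm = AdaLayerNorm(embedding_dim=64, num_embeddings=1000)  # one embedding per diffusion timestep
x = torch.randn(2, 16, 64)  # (batch, tokens, embedding_dim)
t = torch.tensor(10)        # current timestep index
out = norm(x, t)            # same shape as x, modulated by the timestep embedding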
AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/1a6da09b2212688a3e24842dba535243.txt b/scrapped_outputs/1a6da09b2212688a3e24842dba535243.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/1a6da09b2212688a3e24842dba535243.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. 
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/1aa8fbd05af75459c8b876cd197c29d8.txt b/scrapped_outputs/1aa8fbd05af75459c8b876cd197c29d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/1aa8fbd05af75459c8b876cd197c29d8.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. 
For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. 
dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. 
cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. diff --git a/scrapped_outputs/1ad4a0aedb03744d90c9159fb277b345.txt b/scrapped_outputs/1ad4a0aedb03744d90c9159fb277b345.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/1ad4a0aedb03744d90c9159fb277b345.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. 
+LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task-specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale to a value in [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow that overcomes the slow iterative nature of diffusion models. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of its training the model is very sensitive to guidance_scale values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5.
Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for every input, so we recommend trying different values for the num_inference_steps, guidance_scale, controlnet_conditioning_scale and cross_attention_kwargs parameters and choosing the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well.
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/1afbe66d5d20610b3ce1b303c991b9ed.txt b/scrapped_outputs/1afbe66d5d20610b3ce1b303c991b9ed.txt new file mode 100644 index 0000000000000000000000000000000000000000..f075e9cd0ba2034d16f267a85ac522ba988132d7 --- /dev/null +++ b/scrapped_outputs/1afbe66d5d20610b3ce1b303c991b9ed.txt @@ -0,0 +1,97 @@ +Dance Diffusion + + +Overview + +Dance Diffusion by Zach Evans. +Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai. +For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page. +The original codebase of this implementation can be found here. 
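Every task section above closes with the same advice: try a few values of num_inference_steps and guidance_scale (plus strength or the conditioning scales where they apply) and keep the best result. Below is a minimal sketch of such a sweep; it assumes the SDXL text-to-image pipe from the first section (LCMScheduler set, LCM-LoRA loaded) is still in scope, and the output file names are only illustrative.

# Hedged sketch of the recommended parameter sweep, not part of the original guide.
# Assumes `pipe` is the SDXL + LCM-LoRA text-to-image pipeline built earlier.
import torch

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

for steps in (4, 6, 8):              # LCM-LoRA works best with 4-8 steps
    for cfg in (1.0, 1.5, 2.0):      # keep guidance_scale in the [1.0, 2.0] range
        generator = torch.manual_seed(42)  # fixed seed for a fair comparison
        image = pipe(
            prompt=prompt,
            num_inference_steps=steps,
            guidance_scale=cfg,
            generator=generator,
        ).images[0]
        image.save(f"lcm_lora_{steps}steps_cfg{cfg}.png")

Compare the saved images and keep the combination that best balances detail against artifacts for your prompt.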
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dance_diffusion.py +Unconditional Audio Generation +- + +DanceDiffusionPipeline + + +class diffusers.DanceDiffusionPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet1DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +IPNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 100 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +audio_length_in_s: typing.Optional[float] = None +return_dict: bool = True + +) +→ +AudioPipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio sample at +the expense of slower inference. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. Note that the output of the pipeline, i.e. +sample_size, will be audio_length_in_s * self.unet.config.sample_rate. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. + + +Returns + +AudioPipelineOutput or tuple + + + +~pipelines.utils.AudioPipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/1b4d575caa8ff7c2326419e2d808840a.txt b/scrapped_outputs/1b4d575caa8ff7c2326419e2d808840a.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbba08e6089c48721c4daf719b002f35502d6466 --- /dev/null +++ b/scrapped_outputs/1b4d575caa8ff7c2326419e2d808840a.txt @@ -0,0 +1,573 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. 
To fix this issue, take a look at this PR which recommends for ODE/SDE solvers:set use_karras_sigmas=True or lu_lambdas=True to improve image quality set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE) Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
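As a concrete illustration of the scheduler and dual-prompt tips above, a minimal sketch might look like the following; the flag names come from the tips themselves, while the prompts and step count are only illustrative.

# Hedged sketch of the DPM++ and dual-prompt tips, not an official example.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Tip: with a DPM++ solver, enable Karras sigmas (or lu_lambdas) and, for solvers
# with uniform step sizes such as DPM++2M, euler_at_final to avoid visual artifacts.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True, euler_at_final=True
)

# Tip: SDXL has two text encoders, so a different prompt can be routed to each.
image = pipe(
    prompt="a photo of an astronaut riding a horse on mars",    # first text encoder
    prompt_2="cinematic lighting, highly detailed, 35mm film",  # second text encoder
    num_inference_steps=40,
    height=1024,
    width=1024,
).images[0]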
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. 
+ Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. 
Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. 
If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. 
text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored.
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to +compute decoding in several steps.
This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
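For orientation, here is a minimal usage sketch of the QKV fusion toggles documented above. It simply reuses the checkpoint, image URLs, and prompt from the inpainting example earlier in this section; fusion is enabled before inference and undone afterwards: Copied import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

# Load the inpainting pipeline (same checkpoint as in the example above)
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Fuse the QKV projection matrices before running inference
pipe.fuse_qkv_projections()

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
).convert("RGB")
mask_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
).convert("RGB")

image = pipe(
    prompt="A majestic tiger sitting on a bench", image=init_image, mask_image=mask_image, strength=0.80
).images[0]

# Restore the original, unfused attention projections when you are done
pipe.unfuse_qkv_projections()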
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/1b5343507dacba4aac7882af3141453f.txt b/scrapped_outputs/1b5343507dacba4aac7882af3141453f.txt new file mode 100644 index 0000000000000000000000000000000000000000..1303672965e9fdebfc1a9c219c08f87449df8999 --- /dev/null +++ b/scrapped_outputs/1b5343507dacba4aac7882af3141453f.txt @@ -0,0 +1,45 @@ +Stable Video Diffusion Stable Video Diffusion is a powerful image-to-video generation model that can generate high resolution (576x1024) 2-4 second videos conditioned on the input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate Image to Video Generation There are two variants of SVD: SVD +and SVD-XT. The svd checkpoint is trained to generate 14 frames and the svd-xt checkpoint is further +finetuned to generate 25 frames. We will use the svd-xt checkpoint for this guide. Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) Source Image Video Since generating videos is more memory intensive, we can use the `decode_chunk_size` argument to control how many frames are decoded at once. This will reduce the memory usage. It's recommended to tweak this value based on your GPU memory. +Setting `decode_chunk_size=1` will decode one frame at a time and will use the least amount of memory, but the video might have some flickering. +Additionally, we also use model cpu offloading to reduce the memory usage. Torch.compile You can achieve a 20-25% speed-up at the expense of slightly increased memory by compiling the UNet as follows: Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Low-memory Video generation is very memory intensive as we have to essentially generate num_frames all at once. The mechanism is very comparable to text-to-image generation with a high batch size. To reduce the memory requirement, you have multiple options.
The following options trade inference speed against lower memory requirement: enable model offloading: Each component of the pipeline is offloaded to CPU once it’s not needed anymore. enable feed-forward chunking: The feed-forward layer runs in a loop instead of running with a single huge feed-forward batch size. reduce decode_chunk_size: This means that the VAE decodes frames in chunks instead of decoding them all together. Note: In addition to a small slowdown, this method also slightly degrades video quality. You can enable them as follows: Copied -pipe.enable_model_cpu_offload() +-frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++pipe.enable_model_cpu_offload() ++pipe.unet.enable_forward_chunking() ++frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Including all these tricks should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Along with the conditioning image, Stable Video Diffusion also allows providing micro-conditioning that gives more control over the generated video. +It accepts the following arguments: fps: The frames per second of the generated video. motion_bucket_id: The motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id will increase the motion of the generated video. noise_aug_strength: The amount of noise added to the conditioning image. The higher the value, the less the video will resemble the conditioning image. Increasing this value will also increase the motion of the generated video. Here is an example of using micro-conditioning to generate a video with more motion. Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/1b540cc4703b9c624efe0346491f8c59.txt b/scrapped_outputs/1b540cc4703b9c624efe0346491f8c59.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/1b540cc4703b9c624efe0346491f8c59.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want.
After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 🧨 diff --git a/scrapped_outputs/1b5851a9afbef1b8ba9f8c395d96c261.txt b/scrapped_outputs/1b5851a9afbef1b8ba9f8c395d96c261.txt new file mode 100644 index 0000000000000000000000000000000000000000..de08a59d3b80c6e4fb6b55a1f1ca0865db2fa227 --- /dev/null +++ b/scrapped_outputs/1b5851a9afbef1b8ba9f8c395d96c261.txt @@ -0,0 +1,86 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. 
At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... 
) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. 
Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/1bb804e655727260d7cf57034477a235.txt b/scrapped_outputs/1bb804e655727260d7cf57034477a235.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1bbb2aeb166b36890a02de4ded87547d.txt b/scrapped_outputs/1bbb2aeb166b36890a02de4ded87547d.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/1bbb2aeb166b36890a02de4ded87547d.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. 
For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/1bbcb1ab8d5be1f0d11289804010311d.txt b/scrapped_outputs/1bbcb1ab8d5be1f0d11289804010311d.txt new file mode 100644 index 0000000000000000000000000000000000000000..56e59074b7081ba9ef6c56015b3698c1be3d3268 --- /dev/null +++ b/scrapped_outputs/1bbcb1ab8d5be1f0d11289804010311d.txt @@ -0,0 +1,252 @@ +Latent Diffusion + + +Overview + +Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. 
Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion.py +Text-to-Image Generation +- +pipeline_latent_diffusion_superresolution.py +Super Resolution +- + +Examples: + + +LDMTextToImagePipeline + + +class diffusers.LDMTextToImagePipeline + +< +source +> +( +vqvae: typing.Union[diffusers.models.vq_model.VQModel, diffusers.models.autoencoder_kl.AutoencoderKL] +bert: PreTrainedModel +tokenizer: PreTrainedTokenizer +unet: typing.Union[diffusers.models.unet_2d.UNet2DModel, diffusers.models.unet_2d_condition.UNet2DConditionModel] +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 1.0 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 1.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt at +the, usually at the expense of lower image quality. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. + + + +LDMSuperResolutionPipeline + + +class diffusers.LDMSuperResolutionPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) VAE Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. + + + +A pipeline for image super-resolution using latent diffusion. +This class inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] = None +batch_size: typing.Optional[int] = 1 +num_inference_steps: typing.Optional[int] = 100 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.Tensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image.
Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/1bf252aa1d3200f0b556a14294bde1a4.txt b/scrapped_outputs/1bf252aa1d3200f0b556a14294bde1a4.txt new file mode 100644 index 0000000000000000000000000000000000000000..79beedec7941f21d28c6e409790c9155bcf39eff --- /dev/null +++ b/scrapped_outputs/1bf252aa1d3200f0b556a14294bde1a4.txt @@ -0,0 +1,74 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/text_to_image +pip install -r requirements.txt + + + + Copied cd examples/text_to_image +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA-relevant parameters: --rank: the inner dimension (rank) of the low-rank update matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA-relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save on the Hub with HUB_MODEL_ID. The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM.
Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/1bf38f61e150b1e385cacbea97673d3e.txt b/scrapped_outputs/1bf38f61e150b1e385cacbea97673d3e.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/1bf38f61e150b1e385cacbea97673d3e.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. 
Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. 
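In practice, the main API difference is where the control image goes: the text-to-image pipeline takes the control image as image, while the image-to-image pipeline takes the initial image as image and the control image as control_image. A schematic sketch of the calling convention; pipe_text2img and pipe_img2img are placeholder names for pipelines loaded as shown in this guide, and the image variables stand in for the objects prepared here: Copied
# Schematic only: variables refer to objects created elsewhere in this guide.
# Text-to-image ControlNet: the control image (e.g. the canny image) is passed as `image`
output = pipe_text2img("the mona lisa", image=canny_image).images[0]

# Image-to-image ControlNet: `image` is the starting image and the control image goes in `control_image`
output = pipe_img2img("lego batman and robin", image=init_image, control_image=depth_map).images[0]
The full worked example below prepares a depth map and runs the image-to-image pipeline end to end.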
Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. 
Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. 
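To make the scaling concrete: the per-residual weights follow a geometric ramp from 0.1 for the shallowest DownBlock residual up to 1.0 for the MidBlock residual. A small illustrative sketch of that ramp, assuming the 12 down-block residuals plus 1 mid-block residual of a Stable Diffusion UNet (an approximation for intuition, not necessarily the library's exact internal code): Copied
import torch

# 12 down-block residuals + 1 mid-block residual -> 13 scales spaced geometrically from 0.1 to 1.0
scales = torch.logspace(-1, 0, steps=13)
print(scales[0].item(), scales[-1].item())  # ~0.1 ... 1.0
The example below then runs guess mode end to end without a prompt.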
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
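With multiple ControlNets, the control images and the conditioning scales are passed as lists that line up with the order of the ControlNet models in the pipeline. A schematic sketch of the calling convention; pipe is a placeholder for a pipeline created with a list of ControlNets (OpenPose first, canny second), as in the full example below: Copied
# Schematic only: `pipe`, `openpose_image`, and `canny_image` are created as shown below.
images = [openpose_image, canny_image]        # one control image per ControlNet, same order
result = pipe(
    prompt,
    image=images,
    controlnet_conditioning_scale=[1.0, 0.8],  # one weight per ControlNet, same order
).images[0]
The complete worked example follows.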
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/1bfee94d2e11c9320f0bd79f0963bdda.txt b/scrapped_outputs/1bfee94d2e11c9320f0bd79f0963bdda.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/1bfee94d2e11c9320f0bd79f0963bdda.txt @@ -0,0 
+1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
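Because this scheduler is designed to be paired with RePaintPipeline, a minimal usage sketch may help; the google/ddpm-ema-celebahq-256 checkpoint and the local image and mask file names are illustrative choices, not requirements: Copied
import torch
from diffusers import RePaintPipeline, RePaintScheduler
from diffusers.utils import load_image

# Replace with your own 256x256 image and mask; these file names are placeholders
original_image = load_image("celeba_hq_256.png")
mask_image = load_image("mask_256.png")  # pixels where the mask is 0.0 are inpainted

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    jump_length=10,     # forward time jump length ("j" in the paper)
    jump_n_sample=10,   # number of resampling jumps per chosen time
    generator=torch.manual_seed(0),
)
inpainted = output.images[0]
Because RePaint only alters the reverse diffusion loop, any unconditional DDPM checkpoint at the right resolution can stand in for the one above.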
set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/1c2a29aa78d23d5a50bbe5f0d658e494.txt b/scrapped_outputs/1c2a29aa78d23d5a50bbe5f0d658e494.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/1c2a29aa78d23d5a50bbe5f0d658e494.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. 
From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. 
Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/1c44882ce831ec31c535aacdf568ff81.txt b/scrapped_outputs/1c44882ce831ec31c535aacdf568ff81.txt new file mode 100644 index 
0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/1c44882ce831ec31c535aacdf568ff81.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. 
up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/1c659a3ede8cdafdd507ad9b18c3a334.txt b/scrapped_outputs/1c659a3ede8cdafdd507ad9b18c3a334.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/1c659a3ede8cdafdd507ad9b18c3a334.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. 
To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
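Here is a minimal sketch of swapping DEISMultistepScheduler into an existing pipeline with from_config; the runwayml/stable-diffusion-v1-5 checkpoint, prompt, and step count are illustrative: Copied
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# reuse the existing scheduler config; solver_order=2 is the recommended setting for guided sampling
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# DEIS is a fast high-order solver, so relatively few steps are usually enough
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
Keeping num_inference_steps low is the point of DEIS; the paper reports high-fidelity samples in as few as 10 steps.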
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/1c935797052e5f15cf84a77bb2342ba5.txt b/scrapped_outputs/1c935797052e5f15cf84a77bb2342ba5.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/1c935797052e5f15cf84a77bb2342ba5.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! 
Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. 
The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. 
The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. 
compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/1ca63a2b6b09935e3c3bfe8fd9f2f6c5.txt b/scrapped_outputs/1ca63a2b6b09935e3c3bfe8fd9f2f6c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/1ca63a2b6b09935e3c3bfe8fd9f2f6c5.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. 
❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce an (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called "joint" generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to specify the unconditional generation task ("mode") manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls "marginal" generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode().
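As a brief sketch of that manual override (mirroring the set_joint_mode() example above, with the prompt reused from the previous snippet), the mode can be pinned explicitly before calling the pipeline: Copied
# Equivalent to passing a prompt and letting the pipeline infer text2img mode on its own.
pipe.set_text_to_image_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]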
Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
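One practical note before the full class reference: if GPU memory is tight during decoding, the pipeline also exposes sliced and tiled VAE decoding (both methods are documented below). A minimal sketch, assuming pipe is the UniDiffuserPipeline from the examples above: Copied
# Trade a bit of speed for lower peak memory when decoding latents.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)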
UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModelWithProjection) — +A CLIPVisionModelWithProjection to encode images as part of its image representation along with the VAE +latent representation. clip_image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with UNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1).
Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This assumes +a full set of VAE, CLIP, and text latents, if supplied, overrides the value of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are be generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/1cb83e1c2533a49ecb4f62c2f1ebc730.txt b/scrapped_outputs/1cb83e1c2533a49ecb4f62c2f1ebc730.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/1cb83e1c2533a49ecb4f62c2f1ebc730.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! 
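A small, hedged addition to the butterfly example above: passing a seeded torch.Generator makes repeated runs reproducible, which helps when comparing different num_inference_steps settings (the seed and step count here are arbitrary): Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
# A fixed seed makes the initial noise, and therefore the generated butterfly, repeatable.
seed = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(generator=seed, num_inference_steps=100).images[0]
image.save("generated_image_seeded.png")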
diff --git a/scrapped_outputs/1cb8cce19c67a1c8f36bfc57789c23c5.txt b/scrapped_outputs/1cb8cce19c67a1c8f36bfc57789c23c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..1cfb8190aed87448e28b1c5a54655d114bd647cf --- /dev/null +++ b/scrapped_outputs/1cb8cce19c67a1c8f36bfc57789c23c5.txt @@ -0,0 +1,347 @@ +UniPC + + +Overview + +UniPC is a training-free framework designed for the fast sampling of diffusion models, which consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +For more details about the method, please refer to the paper and the code. +UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models. + +UniPCMultistepScheduler + + +class diffusers.UniPCMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +predict_x0: bool = True +solver_type: str = 'bh2' +lower_order_final: bool = True +disable_corrector: typing.List[int] = [] +solver_p: SchedulerMixin = None + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of UniPC, also the p in UniPC-p; can be any positive integer. Note that the effective order of +accuracy is solver_order + 1 due to the UniC. We recommend using solver_order=2 for guided sampling, +and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction (see section 2.4 of +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both predict_x0=True and thresholding=True to use the +dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models +(such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. + + +predict_x0 (bool, default True) — +whether to use the updating algorithm on the predicted x0. See https://arxiv.org/abs/2211.01095 for details + + +solver_type (str, default bh2) — +the solver type of UniPC. We recommend using bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps.
Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. + + +disable_corrector (list, default []) — +decides at which steps to disable the corrector. For a large guidance scale, the misalignment between the +epsilon_theta(x_t, c) and epsilon_theta(x_t^c, c) might influence the convergence. This can be mitigated +by disabling the corrector at the first few steps (e.g., disable_corrector=[0]). + + +solver_p (SchedulerMixin, default None) — +can be any other scheduler. If specified, the algorithm will become solver_p + UniC. + + + +UniPC is a training-free framework designed for the fast sampling of diffusion models, which consists of a +corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. UniPC is +model-agnostic by design, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can +also be applied to both noise prediction and data prediction models. The corrector UniC can also be applied +after any off-the-shelf solver to increase the order of accuracy. +For more details, see the original paper: https://arxiv.org/abs/2302.04867 +Currently, we support the multistep UniPC for both noise prediction models and data prediction models. We recommend +using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the "dynamic thresholding" method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use the dynamic thresholding. Note +that the thresholding method is unsuitable for latent-space diffusion models (such as stable-diffusion). +ConfigMixin takes care of storing all config attributes that are passed in the scheduler's __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm PC needs. + +multistep_uni_c_bh_update + +< +source +> +( +this_model_output: FloatTensor +this_timestep: int +last_sample: FloatTensor +this_sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +this_model_output (torch.FloatTensor) — the model outputs at x_t + + +this_timestep (int) — the current timestep t + + +last_sample (torch.FloatTensor) — the generated sample before the last predictor: x_{t-1} + + +this_sample (torch.FloatTensor) — the generated sample after the last predictor: x_{t} + + +order (int) — the p of UniC-p at this step. Note that the effective order of accuracy +should be order + 1 + + +Returns + +torch.FloatTensor + + + +the corrected sample tensor at the current timestep. + + +One step for the UniC (B(h) version).
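Stepping back from the individual predictor and corrector updates, a minimal way to try the scheduler end to end is to swap it into an existing pipeline via from_config; this is an illustrative sketch, and the checkpoint name and step count are placeholders rather than part of the original page: Copied
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# from_config keeps the pipeline's trained beta schedule while switching the solver to UniPC.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# The default solver_order=2 is the recommended setting for guided sampling (see the parameters above).
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]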
+ +multistep_uni_p_bh_update + +< +source +> +( +model_output: FloatTensor +prev_timestep: int +sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — +direct outputs from learned diffusion model at the current timestep. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +order (int) — the order of UniP at this step, also the p in UniPC-p. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep UniPC. diff --git a/scrapped_outputs/1cc57e9f9fa4387872e11300153db773.txt b/scrapped_outputs/1cc57e9f9fa4387872e11300153db773.txt new file mode 100644 index 0000000000000000000000000000000000000000..0bb1c25cee39a4d571553ee70193cbd912f297b7 --- /dev/null +++ b/scrapped_outputs/1cc57e9f9fa4387872e11300153db773.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. 
Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states. num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states. additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to 'silu') — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. If prd is chosen, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding, as proposed in the unCLIP paper +(https://arxiv.org/abs/2204.06125). If it is None, no additional embeddings will be prepended. time_embed_dim (int, optional, defaults to None) — The dimension of timestep embeddings. If None, it will be set to num_attention_heads * attention_head_dim. embedding_proj_dim (int, optional, defaults to None) — +The dimension of proj_embedding. If None, it will be set to embedding_dim. clip_embed_dim (int, optional, defaults to None) — +The dimension of the output. If None, it will be set to embedding_dim. A Prior Transformer model.
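To make the documented input shapes concrete, here is a rough, hedged sketch of a single forward pass; the checkpoint name is an assumption about one public model that ships a PriorTransformer in a prior subfolder, and all tensors are random placeholders: Copied
import torch
from diffusers import PriorTransformer

# Assumed checkpoint layout: a "prior" subfolder holding PriorTransformer weights (unCLIP/Karlo-style repo).
prior = PriorTransformer.from_pretrained("kakaobrain/karlo-v1-alpha", subfolder="prior")

batch = 1
dim = prior.config.embedding_dim   # width of the CLIP embeddings the prior operates on
seq = prior.config.num_embeddings  # number of conditioning text tokens

hidden_states = torch.randn(batch, dim)               # current noisy image-embedding estimate
proj_embedding = torch.randn(batch, dim)              # pooled CLIP text embedding
encoder_hidden_states = torch.randn(batch, seq, dim)  # per-token CLIP text hidden states

with torch.no_grad():
    out = prior(hidden_states, timestep=10, proj_embedding=proj_embedding, encoder_hidden_states=encoder_hidden_states)

# The predicted CLIP image embedding for this denoising step.
print(out.predicted_image_embedding.shape)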
forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a PriorTransformerOutput instead of a plain +tuple. Returns +PriorTransformerOutput or tuple + +If return_dict is True, a PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/1ccd03596431311600c01b35fd62ceae.txt b/scrapped_outputs/1ccd03596431311600c01b35fd62ceae.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/1ccd03596431311600c01b35fd62ceae.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. 
In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). 
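Before the full argument reference below, here is a rough generation-mode sketch; the prompt, phrases, and box coordinates are made-up illustrations (one phrase per box, coordinates normalized to [0, 1]), and the checkpoint name follows the generation example further down this page: Copied
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
).to("cuda")

prompt = "a cozy living room"
# Each grounding phrase is placed inside its corresponding [xmin, ymin, xmax, ymax] box.
boxes = [[0.25, 0.55, 0.60, 0.85]]
phrases = ["a birthday cake on a table"]

image = pipe(
    prompt=prompt,
    gligen_phrases=phrases,
    gligen_boxes=boxes,
    gligen_scheduled_sampling_beta=1,
    num_inference_steps=50,
).images[0]
image.save("gligen_generation_sketch.png")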
__call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to process the reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project the image embedding into the phrase embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation.
If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box. input_phrases_mask (int or List[int]) — +Mask (0 or 1) for each entry in gligen_phrases; entries with 0 are ignored (see the style-transfer example below). input_images_mask (int or List[int]) — +Mask (0 or 1) for each entry in gligen_images; entries with 0 are ignored (see the style-transfer example below). gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting).
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... 
num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
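As a usage sketch of the memory helpers documented above (enable_model_cpu_offload, enable_vae_slicing, enable_vae_tiling), assuming the anhnct/Gligen_Text_Image checkpoint from the example above, a CUDA device, and accelerate installed:

import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
)
# Trade some speed for a smaller peak memory footprint.
pipe.enable_model_cpu_offload()  # move whole sub-models to the GPU only when they run
pipe.enable_vae_slicing()        # decode the batch one image at a time
pipe.enable_vae_tiling()         # decode large images tile by tile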
prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask value (0 or 1) for each phrase and image, mask the features +corresponding to that phrase or image. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get the image and phrase embeddings using the pretrained CLIP model. The image embedding is transformed into the +phrase embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (all are zero tensors). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center.
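To illustrate what draw_inpaint_mask_from_boxes produces conceptually, here is an independent sketch (not the pipeline's internal code) that rasterizes normalized boxes into a binary inpainting mask:

import numpy as np


def boxes_to_mask(boxes, size):
    # Rasterize normalized [xmin, ymin, xmax, ymax] boxes into a size x size mask;
    # 1 marks pixels to be inpainted. Illustrative only; the pipeline helper may differ.
    mask = np.zeros((size, size), dtype=np.float32)
    for xmin, ymin, xmax, ymax in boxes:
        x0, y0 = int(xmin * size), int(ymin * size)
        x1, y1 = int(xmax * size), int(ymax * size)
        mask[y0:y1, x0:x1] = 1.0
    return mask


mask = boxes_to_mask([[0.2676, 0.4088, 0.4773, 0.7183]], size=512)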
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/1cf7856f0479fe93f07e66c07c8dacca.txt b/scrapped_outputs/1cf7856f0479fe93f07e66c07c8dacca.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/1cf7856f0479fe93f07e66c07c8dacca.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return an AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation.
Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To display in Google Colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/1d0e14d463daf74f9975e2ad48a196ad.txt b/scrapped_outputs/1d0e14d463daf74f9975e2ad48a196ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/1d0e14d463daf74f9975e2ad48a196ad.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model.
beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/1d0ffb78abf653e562d8cb00e282df3f.txt b/scrapped_outputs/1d0ffb78abf653e562d8cb00e282df3f.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/1d0ffb78abf653e562d8cb00e282df3f.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. 
scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. 
Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. 
To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible for guiding the denoising process for inference as well as for defining a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/1d2fdff396791d51ce0263036094177e.txt b/scrapped_outputs/1d2fdff396791d51ce0263036094177e.txt new file mode 100644 index 0000000000000000000000000000000000000000..025d8d9b7e21e34a1a210fa0bd70fff4f7c14e19 --- /dev/null +++ b/scrapped_outputs/1d2fdff396791d51ce0263036094177e.txt @@ -0,0 +1,63 @@ +BaseOutputs + + +All models have outputs that are instances of subclasses of BaseOutput. Those are +data structures containing all the information returned by the model, but that can also be used as tuples or +dictionaries. +Let’s see how this looks in an example: + + + Copied +from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() +The outputs object is an ImagePipelineOutput, as we can see in the +documentation of that class below, which means it has an images attribute.
+You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None: + + + Copied +outputs.images +or via keyword lookup + + + Copied +outputs["images"] +When considering our outputs object as tuple, it only considers the attributes that don’t have None values. +Here for instance, we could retrieve images via indexing: + + + Copied +outputs[:1] +which will return the tuple (outputs.images) for instance. + +BaseOutput + + +class diffusers.utils.BaseOutput + +< +source +> +( +) + + + +Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +python dictionary. +You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +before. + +to_tuple + +< +source +> +( +) + + + +Convert self to a tuple containing all the attributes/keys that are not None. diff --git a/scrapped_outputs/1d662d4590b4b816da7ab51d9fe7aa83.txt b/scrapped_outputs/1d662d4590b4b816da7ab51d9fe7aa83.txt new file mode 100644 index 0000000000000000000000000000000000000000..848931d1969089ae8a8d21d431c071f2b1f6f901 --- /dev/null +++ b/scrapped_outputs/1d662d4590b4b816da7ab51d9fe7aa83.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. 
out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, defaults to True) — +If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). wrapper < source > ( *args **kwargs ) wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental.
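A usage sketch of the slicing/tiling switches above; the stabilityai/sd-vae-ft-mse checkpoint and the random stand-in image are illustrative assumptions:

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.enable_slicing()  # decode batched latents one image at a time
vae.enable_tiling()   # decode/encode large images tile by tile

with torch.no_grad():
    image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed image in [-1, 1]
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample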
set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. 
out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — +Number of ResNet layers for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) Returns a new object replacing the specified fields with new values. 
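To make the scaling_factor convention and the output classes above concrete, here is a minimal PyTorch sketch (the checkpoint id and the random stand-in image are assumptions for illustration): Copied
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # example checkpoint

image = torch.randn(1, 3, 256, 256)  # stand-in for a preprocessed image in [-1, 1]

with torch.no_grad():
    # encode() returns an AutoencoderKLOutput; latent_dist is a DiagonalGaussianDistribution.
    posterior = vae.encode(image).latent_dist
    latents = posterior.sample() * vae.config.scaling_factor  # z = z * scaling_factor

    # decode() returns a DecoderOutput; undo the scaling first (z = z / scaling_factor).
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample

print(latents.shape)         # torch.Size([1, 4, 32, 32])
print(reconstruction.shape)  # torch.Size([1, 3, 256, 256])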
diff --git a/scrapped_outputs/1d920f776a7627eaba25198383d47575.txt b/scrapped_outputs/1d920f776a7627eaba25198383d47575.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1db4a7d47a10fa403be31cee208ca7c7.txt b/scrapped_outputs/1db4a7d47a10fa403be31cee208ca7c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..1c3a7c4d6baaf8edb204d8505fb82bb01aede289 --- /dev/null +++ b/scrapped_outputs/1db4a7d47a10fa403be31cee208ca7c7.txt @@ -0,0 +1,342 @@ +Text-Guided Image Inpainting + + +StableDiffusionInpaintPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. +The original codebase can be found here: +Stable Diffusion V1: CompVis/stable-diffusion +Stable Diffusion V2: Stability-AI/stablediffusion +Available checkpoints are: +stable-diffusion-inpainting (512x512 resolution): runwayml/stable-diffusion-inpainting +stable-diffusion-2-inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting + +class diffusers.StableDiffusionInpaintPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
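A short loading sketch for the checkpoints listed above; the full end-to-end inpainting example appears under __call__ below (float16 and CUDA are assumptions for a typical GPU setup): Copied
import torch
from diffusers import StableDiffusionInpaintPipeline

# Either checkpoint listed above can be used; the Stable Diffusion 2 variant is shown here.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")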
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. 
This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/1de0bbd3929c7a3805b608fe4c0553f5.txt b/scrapped_outputs/1de0bbd3929c7a3805b608fe4c0553f5.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/1de0bbd3929c7a3805b608fe4c0553f5.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! 
Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. 
This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/1de120543ad4be3d07d95e2db554adf8.txt b/scrapped_outputs/1de120543ad4be3d07d95e2db554adf8.txt new file mode 100644 index 0000000000000000000000000000000000000000..12062fad7c1578fb4c93f827d5677dc581faff89 --- /dev/null +++ b/scrapped_outputs/1de120543ad4be3d07d95e2db554adf8.txt @@ -0,0 +1,323 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. 
Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. 
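A quick sketch of the device property in practice; the to() conversion used here is documented next (the checkpoint id is only an example): Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
print(pipe.device)  # device(type='cpu') right after loading

pipe = pipe.to("cuda")
print(pipe.device)  # device(type='cuda', index=0)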
+ to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. 
When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional) — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. Defaults to the latest stable 🤗 Diffusers version. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
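Before the Flax pipeline methods below, here is a minimal sketch of the PyTorch save_pretrained() / from_pretrained() round trip described above (the local directory path is only an example): Copied
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)

# Writes the component weights, their configs, and model_index.json to the directory.
pipe.save_pretrained("./my-stable-diffusion-v1-5")

# The saved directory can later be reloaded just like a Hub repository id.
reloaded = DiffusionPipeline.from_pretrained("./my-stable-diffusion-v1-5")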
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/1e0013a667ce9699fe12a35b7c20de01.txt b/scrapped_outputs/1e0013a667ce9699fe12a35b7c20de01.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/1e0013a667ce9699fe12a35b7c20de01.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
diff --git a/scrapped_outputs/1e4ee927d5e8e7db60e3cd2555e67adb.txt b/scrapped_outputs/1e4ee927d5e8e7db60e3cd2555e67adb.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d1ccc640374c52939e8c79b012d5191e7fb4c25 --- /dev/null +++ b/scrapped_outputs/1e4ee927d5e8e7db60e3cd2555e67adb.txt @@ -0,0 +1,251 @@ +PaintByExample + + +Overview + +Paint by Example: Exemplar-based Image Editing with Diffusion Models by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. +The abstract of the paper is the following: +Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_paint_by_example.py +Image-Guided Image Painting +- + +Tips + +PaintByExample is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. 
The checkpoint has been warm-started from the CompVis/stable-diffusion-v1-4 and with the objective to inpaint partly masked images conditioned on example / reference images +To quickly demo PaintByExample, please have a look at this demo +You can run the following code snippet as an example: + + + Copied +# !pip install diffusers transformers + +import PIL +import requests +import torch +from io import BytesIO +from diffusers import DiffusionPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) +example_image = download_image(example_url).resize((512, 512)) + +pipe = DiffusionPipeline.from_pretrained( + "Fantasy-Studio/Paint-by-Example", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +image + +PaintByExamplePipeline + + +class diffusers.PaintByExamplePipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: PaintByExampleImageEncoder +unet: UNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = False + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for image-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
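+As a small follow-up to the snippet above, the sketch below reuses the pipe, init_image, mask_image, and example_image objects defined there and shows how the generator, num_inference_steps, and guidance_scale arguments documented in the __call__ section below can be used to make the result reproducible; the seed and filename are illustrative only. Copied
+import torch
+
+generator = torch.Generator(device="cuda").manual_seed(0)
+image = pipe(
+    image=init_image,
+    mask_image=mask_image,
+    example_image=example_image,
+    num_inference_steps=50,
+    guidance_scale=5.0,
+    generator=generator,
+).images[0]
+image.save("paint_by_example.png")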
+ +__call__ + +< +source +> +( +example_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 5.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +The exemplar image to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch, which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to example_image. + + +mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch, used to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. 
+ + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/1e593ca141c6e7b76bd2f58c831f03e7.txt b/scrapped_outputs/1e593ca141c6e7b76bd2f58c831f03e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..adefa4c809ce22324ba26e06ba200fc10adfee55 --- /dev/null +++ b/scrapped_outputs/1e593ca141c6e7b76bd2f58c831f03e7.txt @@ -0,0 +1,51 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA relevant parts of the script. UNet text encoder Diffusers uses ~peft.LoraConfig from the PEFT library to set up the parameters of the LoRA adapter such as the rank, alpha, and which modules to insert the LoRA weights into. The adapter is added to the UNet, and only the LoRA layers are filtered for optimization in lora_layers. Copied unet_lora_config = LoraConfig( + r=args.rank, + lora_alpha=args.rank, + init_lora_weights="gaussian", + target_modules=["to_k", "to_q", "to_v", "to_out.0"], +) + +unet.add_adapter(unet_lora_config) +lora_layers = filter(lambda p: p.requires_grad, unet.parameters()) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. 
The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/1e59bf7d5d99dac6cfec223d9428f968.txt b/scrapped_outputs/1e59bf7d5d99dac6cfec223d9428f968.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/1e59bf7d5d99dac6cfec223d9428f968.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. 
sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. +num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. 
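+A minimal sketch of constructing the model and calling its forward method directly; the configuration values and tensor sizes below are illustrative assumptions rather than values used by any released checkpoint. Frames are folded into the batch dimension, matching the output shape documented above. Copied
+import torch
+from diffusers.models import TransformerTemporalModel
+
+# Small illustrative configuration; inner dim = num_attention_heads * attention_head_dim.
+model = TransformerTemporalModel(
+    num_attention_heads=8,
+    attention_head_dim=32,
+    in_channels=256,
+    num_layers=1,
+)
+
+batch_size, num_frames = 2, 8
+# Input shape: (batch_size * num_frames, channels, height, width).
+hidden_states = torch.randn(batch_size * num_frames, 256, 32, 32)
+output = model(hidden_states, num_frames=num_frames).sample
+print(output.shape)  # same shape as the input hidden states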
diff --git a/scrapped_outputs/1e6b37a50783a183a797a010c0fb0e6d.txt b/scrapped_outputs/1e6b37a50783a183a797a010c0fb0e6d.txt new file mode 100644 index 0000000000000000000000000000000000000000..9bde3951f84e5b290f8688e805ab553f8fef7ca1 --- /dev/null +++ b/scrapped_outputs/1e6b37a50783a183a797a010c0fb0e6d.txt @@ -0,0 +1,154 @@ +How to contribute a community pipeline + +💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. +Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. +This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. + +Initialize the pipeline + +You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() +To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: + + + Copied + from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) +Cool, the __init__ step is done and you can move to the forward pass now! 🔥 + +Define the forward pass + +In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: + + + Copied + from diffusers import DiffusionPipeline + import torch + + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output +That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: + + + Copied +from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() +But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. 
For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: + + + Copied +pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32") + +output = pipeline() + +Share your pipeline + +Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. +Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet") +Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: + +GitHub community pipeline +HF Hub community pipeline +usage +same +same +review process +open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower +upload directly to a Hub repository without any review; this is the fastest workflow +visibility +included in the official Diffusers repository and documentation +included on your HF Hub profile and relies on your own usage/promotion to gain visibility +💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. + +How do community pipelines work? + +A community pipeline is a class that inherits from DiffusionPipeline which means: +It can be loaded with the custom_pipeline argument. +The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. +The code that implements a feature in the community pipeline is defined in a pipeline.py file. +Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: + + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPFeatureExtractor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, +) +The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. + + + Copied +# 2. 
Load the pipeline class, if using custom module then load it from the hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/1e797d405cbe0545aee7d78f17258d8b.txt b/scrapped_outputs/1e797d405cbe0545aee7d78f17258d8b.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/1e797d405cbe0545aee7d78f17258d8b.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/1e8e483f159fa9f79f3a720256c7395e.txt b/scrapped_outputs/1e8e483f159fa9f79f3a720256c7395e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/1e8e483f159fa9f79f3a720256c7395e.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. 
beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. 
This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/1e99f7b78a77b8a55a2cdc2f8a041aec.txt b/scrapped_outputs/1e99f7b78a77b8a55a2cdc2f8a041aec.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/1e99f7b78a77b8a55a2cdc2f8a041aec.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. 
If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. 
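+Building on the example above, the short sketch below shows how the slicing and tiling toggles documented here can be used to trade speed for lower decoding memory; it assumes the vae and pipe objects from that example are already defined. Copied
+vae.enable_slicing()  # decode the batch one slice at a time
+vae.enable_tiling()   # decode large images tile by tile
+
+image = pipe("horse", generator=torch.manual_seed(0)).images[0]
+
+# Revert to single-pass decoding when memory is not a concern.
+vae.disable_slicing()
+vae.disable_tiling()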
diff --git a/scrapped_outputs/1ed4310fbdc9454037941d5257143af9.txt b/scrapped_outputs/1ed4310fbdc9454037941d5257143af9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1efcbc29c8b61d1835ec71a5ad6cdc66.txt b/scrapped_outputs/1efcbc29c8b61d1835ec71a5ad6cdc66.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa6755ea3c7c8d60d3512e78072a458c1594b457 --- /dev/null +++ b/scrapped_outputs/1efcbc29c8b61d1835ec71a5ad6cdc66.txt @@ -0,0 +1,552 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends for ODE/SDE solvers:set use_karras_sigmas=True or lu_lambdas=True to improve image quality set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE) Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! 
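+The hedged sketch below illustrates two of the tips above: routing a different prompt to each text encoder and negatively conditioning on image size. The prompt text and sizes are illustrative only; the relevant arguments (prompt_2, negative_original_size, negative_target_size) are documented in the __call__ section below. Copied
+import torch
+from diffusers import StableDiffusionXLPipeline
+
+pipe = StableDiffusionXLPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
+).to("cuda")
+
+image = pipe(
+    prompt="a photo of an astronaut riding a horse on mars",   # sent to the first text encoder
+    prompt_2="cinematic, highly detailed, dramatic lighting",  # sent to the second text encoder
+    negative_original_size=(512, 512),   # steer away from low-resolution-looking results
+    negative_target_size=(1024, 1024),
+).images[0]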
StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. 
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. A brief code sketch of this setup is shown a few parameter entries below. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
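The following is a minimal, illustrative sketch of the “Mixture of Denoisers” setup referenced in the denoising_start/denoising_end entries above. The checkpoint names and the 0.8 split point are assumptions drawn from common SDXL base + refiner usage, not requirements of this pipeline. Copied
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base pipeline denoises the first 80% of the schedule and hands off raw latents
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# This pipeline (the refiner) resumes at the same fraction via denoising_start
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]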
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. 
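To make the micro-conditioning entries above concrete, here is a hedged sketch; pipe and init_image are hypothetical names for an already-loaded StableDiffusionXLImg2ImgPipeline and its input image, and the concrete sizes are arbitrary examples rather than recommended settings. Copied
# Illustrative values only; they show where the SDXL micro-conditioning arguments go
image = pipe(
    prompt="a photo of an astronaut riding a horse on mars",
    image=init_image,
    original_size=(4096, 4096),          # condition as if the source were a large image
    target_size=(1024, 1024),            # desired output resolution conditioning
    crops_coords_top_left=(0, 0),        # favor a well-centered composition
    negative_original_size=(512, 512),   # negatively condition on low-resolution originals
    negative_target_size=(1024, 1024),
).images[0]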
aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-guided image inpainting using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. 
If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. 
final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
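As an illustration of the ip_adapter_image argument described above, here is a hedged sketch; pipe, init_image, mask_image, and style_image are hypothetical names for an already-loaded StableDiffusionXLInpaintPipeline and its inputs, and the IP-Adapter repository and weight file shown are the commonly used ones rather than something mandated by this pipeline. Copied
# Load IP-Adapter weights (repository/weight names are an assumption; verify before use)
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin")

image = pipe(
    prompt="A majestic tiger sitting on a bench",
    image=init_image,
    mask_image=mask_image,
    ip_adapter_image=style_image,  # reference image that conditions the inpainted content
).images[0]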
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. A short usage sketch is shown after the example below. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
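Below is a small, hedged sketch of the callback_on_step_end mechanism described above, reusing pipe, init_image, and mask_image from the example just shown; the callback name and its logging logic are purely illustrative. Copied
# A step-end callback receives (pipeline, step, timestep, callback_kwargs) and must return callback_kwargs
def log_latent_norm(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    if step % 10 == 0:
        print(f"step {step}: latent norm = {latents.norm().item():.2f}")
    return callback_kwargs

image = pipe(
    prompt="A majestic tiger sitting on a bench",
    image=init_image,
    mask_image=mask_image,
    callback_on_step_end=log_latent_norm,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]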
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/1f08e38a0c053268b0dc520689474934.txt b/scrapped_outputs/1f08e38a0c053268b0dc520689474934.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/1f08e38a0c053268b0dc520689474934.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. 
The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. 
Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/1f16c1667e9ded4d0aaa231d4d9453d4.txt b/scrapped_outputs/1f16c1667e9ded4d0aaa231d4d9453d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/1f16c1667e9ded4d0aaa231d4d9453d4.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. 
The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
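As a small illustration of the enable_slicing()/enable_tiling() methods documented above, the following hedged sketch reuses the pipe object from the Tiny AutoEncoder examples at the top of this section. Copied
# Decode one image at a time and tile large images to reduce peak memory
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

image = pipe("slice of delicious New York-style berry cheesecake", num_inference_steps=25).images[0]

# Revert to single-step decoding when memory is not a concern
pipe.vae.disable_slicing()
pipe.vae.disable_tiling()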
scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/1f4788cc8416bb5e0851eb48d2091f7f.txt b/scrapped_outputs/1f4788cc8416bb5e0851eb48d2091f7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/1f4788cc8416bb5e0851eb48d2091f7f.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. 
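For example, get_verbosity and set_verbosity can be paired to raise the log level temporarily and then restore it (a small sketch): Copied
import diffusers

previous = diffusers.logging.get_verbosity()  # e.g. 30, i.e. diffusers.logging.WARNING, by default
diffusers.logging.set_verbosity(diffusers.logging.DEBUG)

# ... run the code you want verbose logs for ...

diffusers.logging.set_verbosity(previous)  # restore the original level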
diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/1f638aaa2b40dac0cdab1aa57b6cff29.txt b/scrapped_outputs/1f638aaa2b40dac0cdab1aa57b6cff29.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/1f85be2e463581d8eb38f68a7eb06241.txt b/scrapped_outputs/1f85be2e463581d8eb38f68a7eb06241.txt new file mode 100644 index 0000000000000000000000000000000000000000..1bb035d0780b0d0794fda4284e2742622d151586 --- /dev/null +++ b/scrapped_outputs/1f85be2e463581d8eb38f68a7eb06241.txt @@ -0,0 +1,108 @@ +Stable diffusion pipelines + +Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. +Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the specific pipeline for latent diffusion that is part of 🤗 Diffusers. +For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official launch announcement post and this section of our own blog post. 
+Tips: +To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: +Overview: +Pipeline +Tasks +Colab +Demo +StableDiffusionPipeline +Text-to-Image Generation + +🤗 Stable Diffusion +StableDiffusionImg2ImgPipeline +Image-to-Image Text-Guided Generation + +🤗 Diffuse the Rest +StableDiffusionInpaintPipeline +Experimental – Text-Guided Image Inpainting + +Coming soon +StableDiffusionDepth2ImgPipeline +Experimental – Depth-to-Image Text-Guided Generation + +Coming soon +StableDiffusionImageVariationPipeline +Experimental – Image Variation Generation + +🤗 Stable Diffusion Image Variations +StableDiffusionUpscalePipeline +Experimental – Text-Guided Image Super-Resolution + +Coming soon +StableDiffusionInstructPix2PixPipeline +Experimental – Text-Based Image Editing + +InstructPix2Pix: Learning to Follow Image Editing Instructions + +Tips + + +How to load and use different schedulers. + +The stable diffusion pipeline uses PNDMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the stable diffusion pipeline such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) + +How to convert all use cases with multiple or single pipeline + +If you want to use all possible use cases in a single DiffusionPipeline you can either: +Make use of the Stable Diffusion Mega Pipeline or +Make use of the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline + +StableDiffusionPipelineOutput + + +class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. 
+ + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/1fc0dc00658b3e5102fdc2811bb89e3b.txt b/scrapped_outputs/1fc0dc00658b3e5102fdc2811bb89e3b.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/1fc0dc00658b3e5102fdc2811bb89e3b.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. 
Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> negative_prompt = "bad, deformed, ugly, bad anatomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed-up during inference. A speed-up +during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence.
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. 
save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/1fd7042ee1aa23d89696c797387f7bea.txt b/scrapped_outputs/1fd7042ee1aa23d89696c797387f7bea.txt new file mode 100644 index 0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/1fd7042ee1aa23d89696c797387f7bea.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. 
The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
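As a short, hedged sketch of how these two methods fit together (instantiating the model with its default configuration is for illustration only, and AttnProcessor is the stock processor class from diffusers.models.attention_processor): Copied
from diffusers import UVit2DModel
from diffusers.models.attention_processor import AttnProcessor

model = UVit2DModel()  # default configuration listed above

# apply a single processor instance to every attention layer ...
model.set_attn_processor(AttnProcessor())

# ... then revert to the default attention implementation
model.set_default_attn_processor()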
UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/1ff137e3d464cb49fa7b146c28496847.txt b/scrapped_outputs/1ff137e3d464cb49fa7b146c28496847.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/1ff137e3d464cb49fa7b146c28496847.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/2004496a41926d30ad959783bc089e7f.txt b/scrapped_outputs/2004496a41926d30ad959783bc089e7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..619c1344357a2477dbdb089431e14fc3b2eaccb0 --- /dev/null +++ b/scrapped_outputs/2004496a41926d30ad959783bc089e7f.txt @@ -0,0 +1,130 @@ +improved pseudo numerical methods for diffusion models (iPNDM) + + +Overview + +Original implementation can be found here. + +IPNDMScheduler + + +class diffusers.IPNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + + +Improved Pseudo numerical methods for diffusion models (iPNDM) ported from @crowsonkb’s amazing k-diffusion +library +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. 
+ + +return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class + + +Returns +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function that propagates the sample with the linear multi-step method. Each step performs a single forward pass of the model and combines it with +outputs stored from previous timesteps to approximate the solution. diff --git a/scrapped_outputs/202f45823ed9b25a5511cb4f617f6086.txt b/scrapped_outputs/202f45823ed9b25a5511cb4f617f6086.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/202f45823ed9b25a5511cb4f617f6086.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tuning or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/203791f07433914b4a40099037994ac8.txt b/scrapped_outputs/203791f07433914b4a40099037994ac8.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/203b542d757253fa7385bb0a5d7b1640.txt b/scrapped_outputs/203b542d757253fa7385bb0a5d7b1640.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/203b542d757253fa7385bb0a5d7b1640.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install .
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. 
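To make the Min-SNR weighting described earlier a bit more concrete, the rebalanced per-sample loss weight for epsilon prediction can be sketched roughly as follows (illustrative names, not the exact variables used in train_text_to_image.py): Copied
import torch

def min_snr_loss_weights(snr: torch.Tensor, snr_gamma: float = 5.0) -> torch.Tensor:
    # Clamp the per-timestep signal-to-noise ratio at snr_gamma and divide by the SNR,
    # which down-weights the easy, high-SNR timesteps and rebalances the MSE loss.
    return torch.clamp(snr, max=snr_gamma) / snr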
The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. 
Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/20459795d5b3fe4ea62e0bbba622287f.txt b/scrapped_outputs/20459795d5b3fe4ea62e0bbba622287f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/204ca2d16bd00457d91203d00fa61bcd.txt b/scrapped_outputs/204ca2d16bd00457d91203d00fa61bcd.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/204ca2d16bd00457d91203d00fa61bcd.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 
🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. 
In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/20a7cf02fa22af13480d5a665ca07b71.txt b/scrapped_outputs/20a7cf02fa22af13480d5a665ca07b71.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/20a7cf02fa22af13480d5a665ca07b71.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. 
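For reference, the ToMe + xFormers configuration evaluated in the benchmark can be reproduced along these lines (a sketch, not the exact benchmarking script): Copied
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipeline.enable_xformers_memory_efficient_attention()  # memory-efficient attention from xFormers
tomesd.apply_patch(pipeline, ratio=0.5)                # token merging on top

image = pipeline("a photo of an astronaut riding a horse on mars").images[0]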
The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/20b70160b3f77982a3f3f53bbe24c17a.txt b/scrapped_outputs/20b70160b3f77982a3f3f53bbe24c17a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cabb3f43d86edf3d429f9913bf7e19cb9e3ada3b --- /dev/null +++ b/scrapped_outputs/20b70160b3f77982a3f3f53bbe24c17a.txt @@ -0,0 +1,308 @@ +Depth-to-Image Generation + + +StableDiffusionDepth2ImgPipeline + +The depth-guided stable diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. It uses MiDas to infer depth based on an image. +StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the images’ structure. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-2-depth: stabilityai/stable-diffusion-2-depth + +class diffusers.StableDiffusionDepth2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +depth_estimator: DPTForDepthEstimation +feature_extractor: DPTFeatureExtractor + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. 
+This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +depth_map: typing.Optional[torch.FloatTensor] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
+ + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. 
Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/20f6a1aaea12e0b1daeefd39822439d8.txt b/scrapped_outputs/20f6a1aaea12e0b1daeefd39822439d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..cd183c157cf8eff7e7916669417c27bf06b12611 --- /dev/null +++ b/scrapped_outputs/20f6a1aaea12e0b1daeefd39822439d8.txt @@ -0,0 +1,225 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. 
LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). 
If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
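These slicing and tiling switches are toggled on the pipeline before calling it; a minimal, illustrative sketch with the SimianLuo/LCM_Dreamshaper_v7 checkpoint from the example above: Copied
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to(torch_device="cuda")

# Decode the latents in slices and tiles to trade a little speed for lower memory usage
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images[0]

# Switch back to single-step decoding when memory is not a concern
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()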
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 
requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
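Both Latent Consistency Model pipelines expose encode_prompt, which makes it possible to precompute text embeddings once and reuse them through the prompt_embeds argument of __call__. A hedged sketch, shown with the text-to-image variant for brevity and assuming encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple as in other Stable Diffusion pipelines: Copied
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to(torch_device="cuda")

# Guidance in LCMs is injected through an embedding of the guidance scale rather than by
# doubling the batch, so classifier-free guidance is disabled when encoding the prompt
prompt_embeds, _ = pipe.encode_prompt(
    prompt="Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,
)

# Reuse the precomputed embeddings through the prompt_embeds argument
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=4, guidance_scale=8.0).images[0]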
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/20fe6ead27bc4da242723847736eb74f.txt b/scrapped_outputs/20fe6ead27bc4da242723847736eb74f.txt new file mode 100644 index 0000000000000000000000000000000000000000..651ea7735a84779102f99c628b73538a2c0f99d1 --- /dev/null +++ b/scrapped_outputs/20fe6ead27bc4da242723847736eb74f.txt @@ -0,0 +1,98 @@ +DDPM + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original codebase of this paper can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddpm.py +Unconditional Image Generation +- + +DDPMPipeline + + +class diffusers.DDPMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
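For reference, a minimal unconditional-generation sketch with this pipeline (using the google/ddpm-cifar10-32 checkpoint mentioned earlier) might look like: Copied
from diffusers import DDPMPipeline

# Load the pretrained unconditional DDPM and move it to the GPU
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32").to("cuda")

# Run the full 1000-step denoising loop (the default) and save the generated image
image = pipeline(num_inference_steps=1000).images[0]
image.save("ddpm_generated_image.png")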
+ +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 1000 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/2134ca930ee717510693231026145e48.txt b/scrapped_outputs/2134ca930ee717510693231026145e48.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc3d7a1c7171fd34d73cc2b81943b203dbd4e6fa --- /dev/null +++ b/scrapped_outputs/2134ca930ee717510693231026145e48.txt @@ -0,0 +1,457 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . 
Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. 
Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image A OpenPose bone image. TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image A mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1Trained with semantic segmentation An custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. 
Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... ) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
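Sliced VAE decoding (enable_vae_slicing above) is most useful when decoding several images per prompt. A minimal sketch, reusing the color-adapter checkpoint and reference image from the example above; the batch size of 4 is illustrative only:

>>> import torch
>>> from PIL import Image
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
>>> from diffusers.utils import load_image

>>> image = load_image(
...     "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png"
... )
>>> color_palette = image.resize((8, 8)).resize((512, 512), resample=Image.Resampling.NEAREST)

>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
... ).to("cuda")

>>> # Decode latents slice by slice so the larger batch fits in memory.
>>> pipe.enable_vae_slicing()
>>> images = pipe(
...     "At night, glowing cubes in front of the beach",
...     image=color_palette,
...     num_images_per_prompt=4,
... ).images
>>> pipe.disable_vae_slicing()  # back to single-pass decoding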
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
+If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
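Beyond the basic example above, the adapter_conditioning_scale and adapter_conditioning_factor arguments of __call__ (documented above) control how strongly, and for what fraction of the denoising timesteps, the adapter residuals are applied. A minimal sketch reusing the sketch-adapter checkpoint from the example above; the 0.8 and 0.5 values are illustrative, not recommended defaults:

>>> import torch
>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline
>>> from diffusers.utils import load_image

>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L")

>>> adapter = T2IAdapter.from_pretrained(
...     "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
... )
>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
... ).to("cuda")

>>> image = pipe(
...     prompt="a photo of a dog in real world, high quality",
...     image=sketch_image,
...     adapter_conditioning_scale=0.8,   # scale the adapter residuals added to the UNet
...     adapter_conditioning_factor=0.5,  # apply the adapter only for the first half of the timesteps
... ).images[0]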
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
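encode_prompt can be used to precompute the prompt embeddings once and reuse them across several calls (for example when sweeping seeds or adapter scales). A sketch continuing from the pipeline and sketch image loaded in the example above, assuming the four embedding tensors are returned in the order shown:

>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a photo of a dog in real world, high quality",
...     negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )

>>> image = pipe(
...     image=sketch_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
... ).images[0]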
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — dimension of the embeddings to generate dtype — data type of the generated embeddings Returns torch.FloatTensor Embedding vectors with shape (len(timesteps), embedding_dim) See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 aMUSEd aMUSEd is a lightweight text-to-image model based on the MUSE architecture. aMUSEd is particularly useful in applications that require a lightweight and fast model, such as generating many images quickly at once. aMUSEd is a VQ-VAE token-based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with MUSE, it uses the smaller text encoder CLIP-L/14 instead of T5-XXL. Due to its small parameter count and few-forward-pass generation process, aMUSEd can generate many images quickly. This benefit is seen particularly at larger batch sizes. Model / Params: amused-256 — 603M, amused-512 — 608M. AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 12) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +gneration. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. 
When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... 
) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). 
And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. 
temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns ImagePipelineOutput or tuple If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images. The call function to the pipeline for generation. Examples: Copied >>> import torch
>>> from diffusers import AmusedInpaintPipeline
>>> from diffusers.utils import load_image

>>> pipe = AmusedInpaintPipeline.from_pretrained(
...     "amused/amused-512", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "fall mountains"
>>> input_image = (
...     load_image(
...         "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg"
...     )
...     .resize((512, 512))
...     .convert("RGB")
... )
>>> mask = (
...     load_image(
...         "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png"
...     )
...     .resize((512, 512))
...     .convert("L")
... )
>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedent. Examples: Copied >>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images.
As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/2185869221daeeee6119f88310d98d6d.txt b/scrapped_outputs/2185869221daeeee6119f88310d98d6d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c988f5fbd3af0bec8702ce742b8e0ac4c55d6c07 --- /dev/null +++ b/scrapped_outputs/2185869221daeeee6119f88310d98d6d.txt @@ -0,0 +1,151 @@ +DPM Discrete Scheduler inspired by the Karras et al. paper Overview Inspired by Karras et al. (2022). The scheduler was ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library; all credit for making this scheduler work goes to Katherine Crowson. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' ) Parameters num_train_timesteps (int, defaults to 1000) — The number of diffusion steps used to train the model. beta_start (float, defaults to 0.00085) — The starting beta value of inference. beta_end (float, defaults to 0.012) — The final beta value. beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear. trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of the Imagen Video paper, https://imagen.research.google/video/paper.pdf). Scheduler created by @crowsonkb in k-diffusion, see https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188. The scheduler is inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022). ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps; they can be accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — The input sample. timestep (float or torch.FloatTensor) — The current timestep in the diffusion chain. Returns torch.FloatTensor A scaled input sample. Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. set_timesteps < source > ( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None ) Parameters num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — The device to which the timesteps should be moved to.
If None, the timesteps are not moved. Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. step < source > ( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — The direct output from the learned diffusion model. timestep (float or torch.FloatTensor) — The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — The current instance of the sample being created by the diffusion process. return_dict (bool) — Whether to return a SchedulerOutput or a plain tuple. Returns SchedulerOutput or tuple A SchedulerOutput if return_dict is True, otherwise a tuple; when returning a tuple, the first element is the sample tensor. Predict the sample at the previous timestep by reversing the SDE. This is the core function that propagates the diffusion process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/21864027515bc65ba244b898f40035e7.txt b/scrapped_outputs/21864027515bc65ba244b898f40035e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/21864027515bc65ba244b898f40035e7.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — The starting beta value of inference.
beta_end (float, defaults to 0.02) — The final beta value. beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — The weight of the noise added in a diffusion step. A value of 0.0 corresponds to the deterministic DDIM update rule and a value of 1.0 corresponds to the fully stochastic DDPM update rule. trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — The input sample. timestep (int, optional) — The current timestep in the diffusion chain. Returns torch.FloatTensor A scaled input sample. Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — The number of diffusion steps used when generating samples with a pre-trained model. If used, timesteps must be None. jump_length (int, defaults to 10) — The number of steps taken forward in time before going backward in time for a single jump (“j” in the RePaint paper). Take a look at Figures 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — The number of times to make a forward time jump for a given chosen time sample. Take a look at Figures 9 and 10 in the paper. device (str or torch.device, optional) — The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — The direct output from the learned diffusion model. timestep (int) — The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — The original image to inpaint on. mask (torch.FloatTensor) — The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — A random number generator. return_dict (bool, optional, defaults to True) — Whether or not to return a RePaintSchedulerOutput or tuple. Returns RePaintSchedulerOutput or tuple If return_dict is True, RePaintSchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
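In practice step() is rarely called by hand; RePaintScheduler is driven by RePaintPipeline. The following is a minimal sketch of that usage, assuming locally available image and mask files; the checkpoint name, file names, and settings are illustrative, not prescriptive.

Copied
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

# an unconditional DDPM checkpoint is used as the generative prior (illustrative choice)
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

# user-supplied 256x256 inputs; per the scheduler docs above, 0.0 in the mask marks the region to inpaint
original_image = Image.open("celeba_hq_256.png").convert("RGB")
mask_image = Image.open("mask_256.png").convert("RGB")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,   # RePaint typically needs many steps
    eta=0.0,
    jump_length=10,            # "j" from the paper, see set_timesteps above
    jump_n_sample=10,
    generator=generator,
)
output.images[0].save("inpainted.png")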
RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/21b228e32ff81051e9d95ef7a2144535.txt b/scrapped_outputs/21b228e32ff81051e9d95ef7a2144535.txt new file mode 100644 index 0000000000000000000000000000000000000000..d9d30a7d367c357d2e506841038933d2d5cecb7f --- /dev/null +++ b/scrapped_outputs/21b228e32ff81051e9d95ef7a2144535.txt @@ -0,0 +1,43 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. 
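The scheduler is normally not instantiated on its own; a typical pattern is to swap it into an existing pipeline with from_config so the beta schedule and timestep settings are inherited from the checkpoint. A minimal sketch, where the checkpoint id and step count are illustrative assumptions:

Copied
import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# reuse the pipeline's scheduler config so the noise schedule stays consistent
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("lms.png")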
scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/21d21aea5f1cd4447fd179bceee7668d.txt b/scrapped_outputs/21d21aea5f1cd4447fd179bceee7668d.txt new file mode 100644 index 0000000000000000000000000000000000000000..44b7c3217aecabaffcc1588bafca11767d420516 --- /dev/null +++ b/scrapped_outputs/21d21aea5f1cd4447fd179bceee7668d.txt @@ -0,0 +1,44 @@ +Unconditional image generation + + + + + + + + + + + + +Unconditional image generation is a relatively straightforward task. The model only generates images - without any additional context like text or an image - resembling the training data it was trained on. +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. +Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. 
You can use any of the 🧨 Diffusers checkpoints from the Hub (the checkpoint you’ll use generates images of butterflies).
💡 Want to train your own unconditional image generation model? Take a look at the training guide to learn how to generate your own images.
In this guide, you’ll use DiffusionPipeline for unconditional image generation with DDPM:

Copied
>>> from diffusers import DiffusionPipeline

>>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
The DiffusionPipeline downloads and caches all modeling and scheduling components.
Because sampling runs the model once for every denoising step, we strongly recommend running it on a GPU.
You can move the generator object to a GPU, just like you would in PyTorch:

Copied
>>> generator.to("cuda")
Now you can use the generator to generate an image:

Copied
>>> image = generator().images[0]
The output is by default wrapped into a PIL.Image object.
You can save the image by calling:

Copied
>>> image.save("generated_image.png")
Try out the Spaces below, and feel free to play around with the inference steps parameter to see how it affects the image quality! diff --git a/scrapped_outputs/21d228f44c3ce64dc5796117cf664713.txt b/scrapped_outputs/21d228f44c3ce64dc5796117cf664713.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/21eb3064147321a6afaf74c1f3df0ff1.txt b/scrapped_outputs/21eb3064147321a6afaf74c1f3df0ff1.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/21eb3064147321a6afaf74c1f3df0ff1.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, the text encoder, or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion XL (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — The names of the adapters to delete. Can be a single string or a list of strings. Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — The text encoder module to disable the LoRA layers for. If None, it will try to get the text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder attribute.
Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. 
The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). 
+A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. 
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/21f978f91743a4a59d177285bec71c6f.txt b/scrapped_outputs/21f978f91743a4a59d177285bec71c6f.txt new file mode 100644 index 0000000000000000000000000000000000000000..67a04d47a343157cf49701884d4b1a9f5310d4bb --- /dev/null +++ b/scrapped_outputs/21f978f91743a4a59d177285bec71c6f.txt @@ -0,0 +1,70 @@ +Weighting prompts + +Text-guided diffusion models generate images based on a given text prompt. 
The text prompt can include multiple concepts that the model should generate, and it’s often desirable to weight certain parts of the prompt more or less.
Diffusion models work by conditioning the cross-attention layers of the diffusion model with contextualized text embeddings (see the Stable Diffusion Guide for more information).
Thus, a simple way to emphasize (or de-emphasize) certain parts of the prompt is to increase or reduce the scale of the text embedding vector that corresponds to the relevant part of the prompt.
This is called “prompt-weighting” and has been a highly requested feature in the community (see issue here).

How to do prompt-weighting in Diffusers

We believe the role of diffusers is to be a toolbox that provides essential features enabling other projects, such as InvokeAI or diffuzers, to build powerful UIs. In order to support arbitrary methods to manipulate prompts, diffusers exposes a prompt_embeds argument in many pipelines such as StableDiffusionPipeline, allowing you to directly pass the “prompt-weighted”/scaled text embeddings to the pipeline.
The compel library provides an easy way to emphasize or de-emphasize portions of the prompt for you. We strongly recommend it instead of preparing the embeddings yourself.
Let’s look at a simple example. Imagine you want to generate an image of "a red cat playing with a ball" as follows:

Copied
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

prompt = "a red cat playing with a ball"

generator = torch.Generator(device="cpu").manual_seed(33)

image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]
image
This gives you:

As you can see, there is no “ball” in the image. Let’s emphasize this part!
For this we should install the compel library:

Copied
pip install compel
and then create a Compel object:

Copied
from compel import Compel

compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)
Now we emphasize the part “ball” with the "++" syntax:

Copied
prompt = "a red cat playing with a ball++"
and instead of passing this to the pipeline directly, we have to process it using compel_proc:

Copied
prompt_embeds = compel_proc(prompt)
Now we can pass prompt_embeds directly to the pipeline:

Copied
generator = torch.Generator(device="cpu").manual_seed(33)

image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0]
image
We now get the following image which has a “ball”!
Similarly, we can de-emphasize parts of the sentence by using the -- suffix for words; feel free to give it a try!
If your favorite pipeline does not have a prompt_embeds input, please make sure to open an issue; the diffusers team tries to be as responsive as possible.
Also, please check out the documentation of the compel library for more information. diff --git a/scrapped_outputs/2212d0dcb5ebff3a4b5660b5694cf80b.txt b/scrapped_outputs/2212d0dcb5ebff3a4b5660b5694cf80b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/2212d0dcb5ebff3a4b5660b5694cf80b.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. 
Example: Copied >>> import numpy as np
>>> import PIL.Image
>>> from diffusers import DDIMPipeline

>>> # load model and scheduler
>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

>>> # run pipeline in inference (sample random noise and denoise); request a NumPy array in [0, 1]
>>> image = pipe(eta=0.0, num_inference_steps=50, output_type="np").images

>>> # process image to PIL
>>> image_processed = (image * 255).round().astype(np.uint8)
>>> image_pil = PIL.Image.fromarray(image_processed[0])

>>> # save image
>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/2231d6dbc71cc4fe2460ab95682db02b.txt b/scrapped_outputs/2231d6dbc71cc4fe2460ab95682db02b.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/2231d6dbc71cc4fe2460ab95682db02b.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and the notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — The starting beta value of inference. beta_end (float, defaults to 0.02) — The final beta value. beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — The DPMSolver order which can be 1 or 2 or 3.
It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
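The class reference below does not show a usage pattern, so here is a minimal sketch of pairing the inverse scheduler with its forward counterpart inside StableDiffusionDiffEditPipeline, the DiffEdit setting it was implemented for; the checkpoint, input image path, and prompt are illustrative assumptions.

Copied
import torch
from PIL import Image
from diffusers import (
    StableDiffusionDiffEditPipeline,
    DPMSolverMultistepScheduler,
    DPMSolverMultistepInverseScheduler,
)

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# forward solver for sampling, inverse solver for mapping a real image back to noisy latents
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipe.scheduler.config)

# user-supplied input image (illustrative path)
image = Image.open("fruit_bowl.png").convert("RGB").resize((768, 768))

# run the inversion: the recorded latents can later be passed to the pipeline as `image_latents`
inv_latents = pipe.invert(prompt="a bowl of fruits", image=image).latents
Together with a mask from generate_mask(), these inverted latents drive the editing call described in the DiffEdit pipeline documentation.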
convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. 
return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/223e58880a72b03ea4b7fdabb4722237.txt b/scrapped_outputs/223e58880a72b03ea4b7fdabb4722237.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/223e58880a72b03ea4b7fdabb4722237.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. 
Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. 
Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). 
Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide.
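As an illustrative sketch (not part of the original guide), LoRA weights saved by train_dreambooth_lora.py can typically be loaded on top of the base model for inference with load_lora_weights in recent Diffusers releases; "path/to/lora/output_dir" below is a placeholder for the --output_dir you trained with:

from diffusers import DiffusionPipeline
import torch

# Load the base model the LoRA adapter was trained on top of.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Placeholder path: the directory passed as --output_dir to train_dreambooth_lora.py.
pipeline.load_lora_weights("path/to/lora/output_dir")
image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")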
Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/225425d5ab306fe55bb4949de67063ca.txt b/scrapped_outputs/225425d5ab306fe55bb4949de67063ca.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4d313869e87be5d21416eaebaf209bc174ce6fb --- /dev/null +++ b/scrapped_outputs/225425d5ab306fe55bb4949de67063ca.txt @@ -0,0 +1,89 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) Load from a local file Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a pipeline.py file that contains the pipeline class in order to successfully load it. 
Copied pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="./path/to/pipeline_directory/", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) Load from a specific version By default, community pipelines are loaded from the latest stable version of Diffusers. To load a community pipeline from another version, use the custom_revision parameter. main older version For example, to load from the main branch: Copied pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + custom_revision="main", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. 
Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/229ed35ca73e4a52081e1a6650e6c995.txt b/scrapped_outputs/229ed35ca73e4a52081e1a6650e6c995.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/229ed35ca73e4a52081e1a6650e6c995.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. 
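As a rough sketch of how a few of these techniques combine in practice (assuming a CUDA GPU, PyTorch 2.0+, and the runwayml/stable-diffusion-v1-5 checkpoint; the later guides in this section cover each option in detail):

import torch
from diffusers import DiffusionPipeline

# Half-precision weights roughly halve GPU memory usage.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Sliced attention trades a small amount of speed for lower peak memory.
pipeline.enable_attention_slicing()
# torch.compile (PyTorch 2.0+) can speed up the UNet's forward pass.
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

Which combination helps most depends on your hardware, so treat this as a starting point rather than a recipe.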
diff --git a/scrapped_outputs/22f19334ea13d1d620fc304b4120ba2c.txt b/scrapped_outputs/22f19334ea13d1d620fc304b4120ba2c.txt new file mode 100644 index 0000000000000000000000000000000000000000..3f1da6f062b9100f23c460d43fea7ef944c9e17f --- /dev/null +++ b/scrapped_outputs/22f19334ea13d1d620fc304b4120ba2c.txt @@ -0,0 +1,555 @@ +Pipelines + +The DiffusionPipeline is the easiest way to load any pretrained diffusion pipeline from the Hub and to use it in inference. +One should not use the Diffusion Pipeline class for training or fine-tuning a diffusion model. Individual + components of diffusion pipelines are usually trained individually, so we suggest to directly work + with `UNetModel` and `UNetConditionModel`. + +Any diffusion pipeline that is loaded with from_pretrained() will automatically +detect the pipeline type, e.g. StableDiffusionPipeline and consequently load each component of the +pipeline and pass them into the __init__ function of the pipeline, e.g. __init__(). +Any pipeline object can be saved locally with save_pretrained(). + +DiffusionPipeline + + +class diffusers.DiffusionPipeline + +< +source +> +( +) + + + +Base class for all models. +DiffusionPipeline takes care of storing all components (models, schedulers, processors) for diffusion pipelines +and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to: +move all PyTorch modules to the device of your choice +enabling/disabling the progress bar for the denoising iteration +Class attributes: +config_name (str) — name of the config file that will store the class and module names of all +components of the diffusion pipeline. +_optional_components (Liststr) — list of all components that are optional so they don’t have to be +passed for the pipeline to function (should be overridden by subclasses). + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +device + +< +source +> +( +) +→ +torch.device + +Returns + +torch.device + + + +The torch device on which the pipeline is located. + + + +to + +< +source +> +( +torch_device: typing.Union[str, torch.device, NoneType] = None +torch_dtype: typing.Optional[torch.dtype] = None +silence_dtype_warnings: bool = False + +) + + + + +components + +< +source +> +( +) + + + +The self.components property can be useful to run different pipelines with the same weights and +configurations to not have to re-allocate memory. + +Examples: + + + Copied +>>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +download + +< +source +> +( +pretrained_model_name +**kwargs + +) + + +Parameters + +pretrained_model_name (str or os.PathLike, optional) — +Should be a string, the repo id of a pretrained pipeline hosted inside a model repo on +https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like +CompVis/ldm-text2im-large-256. 
+ + + +Download and cache a PyTorch diffusion pipeline from pre-trained pipeline weights. +custom_pipeline (str, optional): +This is an experimental feature and is likely to change in the future. +Can be either: +A string, the repo id of a custom pipeline hosted inside a model repo on +https://huggingface.co/. Valid repo ids have to be located under a user or organization name, +like hf-internal-testing/diffusers-dummy-pipeline. +It is required that the model repo has a file, called pipeline.py that defines the custom +pipeline. +A string, the file name of a community pipeline hosted on GitHub under +https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to +match exactly the file name without .py located under the above link, e.g. +clip_guided_stable_diffusion. +Community pipelines are always loaded from the current main branch of GitHub. +A path to a directory containing a custom pipeline, e.g., ./my_pipeline_directory/. +It is required that the directory has a file, called pipeline.py that defines the custom +pipeline. +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines +force_download (bool, optional, defaults to False): +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. +resume_download (bool, optional, defaults to False): +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. +proxies (Dict[str, str], optional): +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. +output_loading_info(bool, optional, defaults to False): +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. +local_files_only(bool, optional, defaults to False): +Whether or not to only look at local files (i.e., do not try to download the model). +use_auth_token (str or bool, optional): +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running huggingface-cli login (stored in ~/.huggingface). +revision (str, optional, defaults to "main"): +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. +custom_revision (str, optional, defaults to "main" when loading from the Hub and to local version of +diffusers when loading from GitHub): +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a +custom pipeline from GitHub. +mirror (str, optional): +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. specify the folder name here. +variant (str, optional): +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. 
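For illustration only (not part of the original reference), a minimal call might look like the following; the repo id is just an example checkpoint, and the return value is expected to be the local folder holding the downloaded pipeline files:

from diffusers import DiffusionPipeline

# Fetches (or reuses the cached copy of) the pipeline's files without instantiating the pipeline.
cached_folder = DiffusionPipeline.download("runwayml/stable-diffusion-v1-5")
print(cached_folder)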
+It is required to be logged in (huggingface-cli login) when you want to use private or gated +models + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id of a pretrained pipeline hosted inside a model repo on +https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like +CompVis/ldm-text2im-large-256. +A path to a directory containing pipeline weights saved using +save_pretrained(), e.g., ./my_pipeline_directory/. + + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +custom_pipeline (str, optional) — + +This is an experimental feature and is likely to change in the future. + +Can be either: + + +A string, the repo id of a custom pipeline hosted inside a model repo on +https://huggingface.co/. Valid repo ids have to be located under a user or organization name, +like hf-internal-testing/diffusers-dummy-pipeline. + +It is required that the model repo has a file, called pipeline.py that defines the custom +pipeline. + + + +A string, the file name of a community pipeline hosted on GitHub under +https://github.com/huggingface/diffusers/tree/main/examples/community. 
Valid file names have to +match exactly the file name without .py located under the above link, e.g. +clip_guided_stable_diffusion. + +Community pipelines are always loaded from the current main branch of GitHub. + + + +A path to a directory containing a custom pipeline, e.g., ./my_pipeline_directory/. + +It is required that the directory has a file, called pipeline.py that defines the custom +pipeline. + + + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running huggingface-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +custom_revision (str, optional, defaults to "main" when loading from the Hub and to local version of diffusers when loading from GitHub) — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a +custom pipeline from GitHub. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. specify the folder name here. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. 
This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +use_safetensors (bool, optional ) — +If set to True, the pipeline will be loaded from safetensors weights. If set to None (the +default). The pipeline will load using safetensors if the safetensors weights are available and if +safetensors is installed. If the to False the pipeline will not use safetensors. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load - and saveable variables - i.e. the pipeline components - of the +specific pipeline class. The overwritten components are then directly passed to the pipelines +__init__ method. See example below for more information. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + + +Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights. +The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models, e.g. "runwayml/stable-diffusion-v1-5" +Activate the special “offline-mode” to use +this method in a firewalled environment. + +Examples: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler + +numpy_to_pil + +< +source +> +( +images + +) + + + +Convert a numpy image or a batch of images to a PIL image. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to +a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading +method. 
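For example, a minimal save might look like this (an illustrative sketch; the checkpoint and target directory are placeholders):

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Writes the weights of every saveable component plus the pipeline configuration (model_index.json) to the directory.
pipeline.save_pretrained("./my_pipeline_directory", safe_serialization=True)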
The pipeline can easily be re-loaded using the [from_pretrained()](/docs/diffusers/v0.16.0/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) class method. + +ImagePipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.ImagePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + + +Output class for image pipelines. + +AudioPipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.AudioPipelineOutput + +< +source +> +( +audios: ndarray + +) + + +Parameters + +audios (np.ndarray) — +List of denoised samples of shape (batch_size, num_channels, sample_rate). Numpy array present the +denoised audio samples of the diffusion pipeline. + + + +Output class for audio pipelines. diff --git a/scrapped_outputs/22fe6c6fabf8eb56667fd7344a04ae7d.txt b/scrapped_outputs/22fe6c6fabf8eb56667fd7344a04ae7d.txt new file mode 100644 index 0000000000000000000000000000000000000000..fda34355c59de34ea233bb3813ca74088e89b1bb --- /dev/null +++ b/scrapped_outputs/22fe6c6fabf8eb56667fd7344a04ae7d.txt @@ -0,0 +1,187 @@ +Euler scheduler + + +Overview + +Euler scheduler (Algorithm 2) from the paper Elucidating the Design Space of Diffusion-Based Generative Models by Karras et al. (2022). Based on the original k-diffusion implementation by Katherine Crowson. +Fast scheduler which often times generates good outputs with 20-30 steps. + +EulerDiscreteScheduler + + +class diffusers.EulerDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' +interpolation_type: str = 'linear' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default "epsilon", optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +interpolation_type (str, default "linear", optional) — +interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of +["linear", "log_linear"]. + + + +Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. 
They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +s_churn: float = 0.0 +s_tmin: float = 0.0 +s_tmax: float = inf +s_noise: float = 1.0 +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +s_churn (float) — + + +s_tmin (float) — + + +s_tmax (float) — + + +s_noise (float) — + + +generator (torch.Generator, optional) — Random number generator. + + +return_dict (bool) — option for returning tuple rather than EulerDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/23055201fafc0e437aaa7cc14a875c51.txt b/scrapped_outputs/23055201fafc0e437aaa7cc14a875c51.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/23055201fafc0e437aaa7cc14a875c51.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. 
Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. 
Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). 
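As an illustrative sketch of the Diffusers implementation of this method (StableDiffusionPanoramaPipeline), assuming the stabilityai/stable-diffusion-2-base checkpoint and a CUDA GPU:

import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipeline = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")
# A wide output resolution produces the panorama effect.
image = pipeline("a photo of the dolomites", height=512, width=2048).images[0]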
Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/230a1a1461381aeb24b786687453c6b9.txt b/scrapped_outputs/230a1a1461381aeb24b786687453c6b9.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/230a1a1461381aeb24b786687453c6b9.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. 
Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/2357b4694fa1d46ad91562032dd94655.txt b/scrapped_outputs/2357b4694fa1d46ad91562032dd94655.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/2357b4694fa1d46ad91562032dd94655.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. 
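To tie the setters and getters above together, here is a short sketch that only uses functions documented in this section; the printed integer maps to the table above (30 is the default WARNING level).

import diffusers
from diffusers.utils import logging

# Read the current verbosity as an integer (30 == WARNING by default).
print(logging.get_verbosity())

# Raise the verbosity to DEBUG, then restore the default WARNING level.
logging.set_verbosity(diffusers.logging.DEBUG)
logging.set_verbosity_warning()

# Temporarily hide the tqdm progress bars shown during model downloads, then re-enable them.
logging.disable_progress_bar()
logging.enable_progress_bar()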
diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/2359651b1f885b8a4a20f361532b1e6d.txt b/scrapped_outputs/2359651b1f885b8a4a20f361532b1e6d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2382cbf3a79e54071379becdb8d7b7cf.txt b/scrapped_outputs/2382cbf3a79e54071379becdb8d7b7cf.txt new file mode 100644 index 0000000000000000000000000000000000000000..49e19fb4c11ed7fa69c26f38e304a1a47862bdca --- /dev/null +++ b/scrapped_outputs/2382cbf3a79e54071379becdb8d7b7cf.txt @@ -0,0 +1,466 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . 
Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. 
Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with an 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with Canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image An OpenPose bone image. TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image An mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1 Trained with semantic segmentation A custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder='sketch_sdxl_1.0' Adapter/t2iadapter, subfolder='canny_sdxl_1.0' Adapter/t2iadapter, subfolder='openpose_sdxl_1.0' Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look like this: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters.
Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... ) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. 
adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. 
If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. 
Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... 
).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. 
If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters w (torch.Tensor) — +Guidance scale values at which to generate the embedding vectors embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(w), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/23b470b058553591e7e9d71f681e89f7.txt b/scrapped_outputs/23b470b058553591e7e9d71f681e89f7.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/23b470b058553591e7e9d71f681e89f7.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.
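To get a feel for how this class is used on its own, here is a minimal, hedged sketch of loading a pretrained unconditional 2D UNet and running a single denoising forward pass; the checkpoint name google/ddpm-cat-256 is only an illustrative example, and any UNet2DModel checkpoint with a compatible config should work the same way: Copied
import torch
from diffusers import UNet2DModel

# Load a pretrained unconditional 2D UNet (checkpoint name is illustrative).
model = UNet2DModel.from_pretrained("google/ddpm-cat-256")

# Build a random "noisy sample" with the shape the model expects:
# (batch, in_channels, sample_size, sample_size).
sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)
timestep = 999  # a single diffusion timestep

with torch.no_grad():
    # The forward pass returns a UNet2DOutput; .sample holds the model prediction.
    noise_pred = model(sample, timestep).sample

print(noise_pred.shape)  # same shape as the input sample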
UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. 
norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/23bb481818f347aaf7a15899518c3ef9.txt b/scrapped_outputs/23bb481818f347aaf7a15899518c3ef9.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/23bb481818f347aaf7a15899518c3ef9.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. 
Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. 
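To make the effect of --snr_gamma more concrete, the sketch below shows the Min-SNR-style loss reweighting for epsilon (noise) prediction, where the per-example MSE loss is scaled by min(SNR, gamma) / SNR. This is only an illustrative reimplementation; the exact code in train_text_to_image.py (its compute_snr helper and the v_prediction branch) may differ in detail: Copied
import torch

def min_snr_loss_weights(snr: torch.Tensor, snr_gamma: float = 5.0) -> torch.Tensor:
    """Min-SNR weights for epsilon prediction: min(SNR, gamma) / SNR."""
    return torch.minimum(snr, torch.full_like(snr, snr_gamma)) / snr

# Illustrative per-example SNR values for a batch of sampled timesteps.
snr = torch.tensor([0.05, 1.0, 20.0, 400.0])
weights = min_snr_loss_weights(snr, snr_gamma=5.0)

# The unreduced MSE loss would then be rescaled per example before averaging:
# loss = (weights * mse_per_example).mean()
print(weights)  # low-noise (high-SNR) timesteps are down-weighted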
Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model!
To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/23ca95b9fd3bdd8db21926b6b95b7671.txt b/scrapped_outputs/23ca95b9fd3bdd8db21926b6b95b7671.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/23ca95b9fd3bdd8db21926b6b95b7671.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. 
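If you want to check the memory footprint on your own hardware, one simple option is PyTorch’s peak-memory counter; a minimal sketch (the exact number will depend on your GPU, resolution, precision, and frame count): Copied
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the pipeline call from the snippet above here ...

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory: {peak_gb:.2f} GB")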
We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth Vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL The Zeroscope checkpoints are watermark-free models that have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth Vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
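As a quick illustration of how these memory helpers are toggled on a loaded pipeline, here is a short sketch that reuses the same checkpoint as the examples above: Copied
import torch
from diffusers import TextToVideoSDPipeline

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Trade a little speed for a smaller memory footprint during VAE decoding.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

video_frames = pipe("Spiderman is surfing", num_frames=24).frames

# The helpers can be switched off again at any time.
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()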
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. diff --git a/scrapped_outputs/23cf2d1bd7d1e34c6aaae4009a6ecb2a.txt b/scrapped_outputs/23cf2d1bd7d1e34c6aaae4009a6ecb2a.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/23cf2d1bd7d1e34c6aaae4009a6ecb2a.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. 
Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +An image processor to be used to preprocess images for CLIP. Pipeline for generating the image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "np" (np.array) or "pt" +(torch.Tensor).
return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
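No example is provided for this controlnet pipeline here (the Examples heading that follows is empty in the source), so below is a minimal, hedged sketch of typical usage: the prior produces the CLIP image embeddings and a depth hint tensor conditions the decoder. The kandinsky-community/kandinsky-2-2-controlnet-depth checkpoint name and the random tensor standing in for a real depth map are assumptions, not taken from this page. Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline

# The prior turns the text prompt into CLIP image embeddings.
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")

# Depth-conditioned decoder checkpoint (name assumed).
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe.to("cuda")

prompt = "A robot, 4k photo"
image_emb, zero_image_emb = pipe_prior(prompt).to_tuple()

# `hint` stands in for a real (1, 3, 768, 768) depth map normalized to [0, 1],
# e.g. produced by a monocular depth estimator.
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("robot_depth.png")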
Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. weights (List[float]) — +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
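The Examples block of KandinskyV22Img2ImgPipeline above is empty in the source. Before the combined pipeline's __call__ reference below, here is a hedged sketch of the usual two-stage flow (prior for embeddings, then the img2img decoder); the prompt, the strength value, and the reuse of the cat.png image from the earlier examples are illustrative choices rather than part of the original documentation. Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")

pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.to("cuda")

prompt = "A red cartoon cat, 4k"
negative_prompt = "low quality, blurry"

# Text -> CLIP image embeddings via the prior.
image_emb, zero_image_emb = pipe_prior(prompt, negative_prompt=negative_prompt).to_tuple()

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/kandinsky/cat.png"
)

image = pipe(
    image=init_image,
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    height=768,
    width=768,
    num_inference_steps=100,
    strength=0.3,
).images[0]
image.save("red_cat.png")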
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
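The Examples heading that follows is empty in the source; this is a hedged sketch of a depth-guided image-to-image run with KandinskyV22ControlnetImg2ImgPipeline. The controlnet-depth checkpoint name and the random tensor standing in for a real depth hint of the input image are assumptions. Copied
import torch
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.to("cuda")

# Depth-conditioned decoder checkpoint (name assumed).
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
)
pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/kandinsky/cat.png"
).resize((768, 768))

prompt = "A cat wearing a crown, 4k photo"

# Blend the prompt embedding with the embedding of the input image.
image_emb, zero_image_emb = pipe_prior(prompt, image=init_image, strength=0.85).to_tuple()

# `hint` stands in for a (1, 3, 768, 768) depth map of the input image in [0, 1].
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image=init_image,
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
    strength=0.5,
).images[0]
image.save("cat_with_crown.png")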
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/23e9fc703822b09bfaff372982512c98.txt b/scrapped_outputs/23e9fc703822b09bfaff372982512c98.txt new file mode 100644 index 0000000000000000000000000000000000000000..05bb3456bec15d58da2bfd1d87651a0c21a28dfb --- /dev/null +++ b/scrapped_outputs/23e9fc703822b09bfaff372982512c98.txt @@ -0,0 +1,175 @@ +Denoising diffusion probabilistic models (DDPM) + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original paper can be found here. 
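Because this page documents the DDPM scheduler (and the library also ships a DDPM pipeline), a short hedged sketch of both usage styles follows. The google/ddpm-cat-256 checkpoint name and its unet/scheduler subfolder layout are assumptions based on the published DDPM checkpoints, not something stated on this page. Copied
import torch
from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel

# 1) Convenience pipeline: loads the UNet and the DDPMScheduler together.
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")  # checkpoint name assumed
image = pipe(num_inference_steps=1000).images[0]
image.save("ddpm_sample.png")

# 2) Manual denoising loop using the scheduler API documented below.
unet = UNet2DModel.from_pretrained("google/ddpm-cat-256", subfolder="unet")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256", subfolder="scheduler")

scheduler.set_timesteps(1000)
sample = torch.randn(1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = unet(sample, t).sample
    # step() reverses one diffusion step and returns a DDPMSchedulerOutput.
    sample = scheduler.step(noise_pred, t, sample).prev_sample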
+ +DDPMScheduler + + +class diffusers.DDPMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +variance_type: str = 'fixed_small' +clip_sample: bool = True +prediction_type: str = 'epsilon' +clip_sample_range: typing.Optional[float] = 1.0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and +Langevin dynamics sampling. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2006.11239 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. 
+ + +return_dict (bool) — option for returning tuple rather than DDPMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDPMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/23f37b290dbf456b58b1d0f1e4c2f484.txt b/scrapped_outputs/23f37b290dbf456b58b1d0f1e4c2f484.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/23f37b290dbf456b58b1d0f1e4c2f484.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). 
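A minimal sketch of how noise_level is passed at call time, assuming the stabilityai/stable-diffusion-2-1-unclip checkpoint and the example image used later on this page; the specific noise_level value is only illustrative. Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/stable_unclip/tarsila_do_amaral.png"
)

# noise_level=0 (the default) keeps variations close to the input image;
# larger values add noise to the image embedding and increase diversity.
image = pipe(init_image, noise_level=100).images[0]
image.save("noisier_variation.png")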
Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. 
prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper.
Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
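Taken together, these switches are toggled on an instantiated pipeline before it is called. The snippet below is a minimal sketch that reuses the fusing/stable-unclip-2-1-l checkpoint from the example above; whether each toggle is worthwhile depends on your hardware, and, as noted above, attention slicing should not be combined with SDPA or xFormers. Copied
import torch
from diffusers import StableUnCLIPPipeline

pipe = StableUnCLIPPipeline.from_pretrained("fusing/stable-unclip-2-1-l", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Decode the VAE in slices to lower peak memory at a small speed cost.
pipe.enable_vae_slicing()

# Only enable attention slicing if you are not already using SDPA (PyTorch 2.0+) or xFormers.
pipe.enable_attention_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Both toggles can be reverted once memory pressure is no longer a concern.
pipe.disable_attention_slicing()
pipe.disable_vae_slicing()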
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. 
When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/240dfba9e329a7742e4e97e3745d17ab.txt b/scrapped_outputs/240dfba9e329a7742e4e97e3745d17ab.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/240dfba9e329a7742e4e97e3745d17ab.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/243dbcbd789ae7678c8e584a220bc7ab.txt b/scrapped_outputs/243dbcbd789ae7678c8e584a220bc7ab.txt new file mode 100644 index 0000000000000000000000000000000000000000..7e564f60c6f595a7fded7c2e79ba36d399dfdcf7 --- /dev/null +++ b/scrapped_outputs/243dbcbd789ae7678c8e584a220bc7ab.txt @@ -0,0 +1,407 @@ +How to contribute to Diffusers 🧨 + +We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! +Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. +Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. 
We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. +We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. + +Overview + +You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. +In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. +Asking and answering questions on the Diffusers discussion forum or on Discord. +Opening new issues on the GitHub Issues tab +Answering issues on the GitHub Issues tab +Fix a simple issue, marked by the “Good first issue” label, see here. +Contribute to the documentation. +Contribute a Community Pipeline +Contribute to the examples. +Fix a more difficult issue, marked by the “Good second issue” label, see here. +Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. +As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. +For all contributions 4.-9. you will need to open a PR. It is explained in detail how to do so in Opening a pull request + +1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord + +Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): +Reports of training or inference experiments in an attempt to share knowledge +Presentation of personal projects +Questions to non-official training examples +Project proposals +General feedback +Paper summaries +Asking for help on personal projects that build on top of the Diffusers library +General questions +Ethical questions regarding diffusion models +… +Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future that has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. +Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formatted/well-posed. For more information, please have a look through the How to write a good issue section. +NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to.
+In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. + +2. Opening new issues on the GitHub issues tab + +The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. +Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. +In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. +Please consider the following guidelines when opening a new issue: +Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). +Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. +Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. +Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. +Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. +New issues usually include the following. + +2.1. Reproducible, minimal bug reports. + +A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: +Narrow the bug down as much as you can, do not just dump your whole code file +Format your code +Do not include any external libraries except for Diffusers depending on them. +Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. +Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. +Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. +If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. 
Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. +For more information, please have a look through the How to write a good issue section. +You can open a bug report here. + +2.2. Feature requests. + +A world-class feature request addresses the following points: +Motivation first: +Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. +Is it related to something you would need for a project? We’d love to hear +about it! +Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. +Write a full paragraph describing the feature; +Provide a code snippet that demonstrates its future use; +In case this is related to a paper, please attach a link; +Attach any additional information (drawings, screenshots, etc.) you think may help. +You can open a feature request here. + +2.3 Feedback. + +Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. +You can open an issue about feedback here. + +2.4 Technical questions. + +Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on +why this part of the code is difficult to understand. +You can open an issue about a technical question here. + +2.5 Proposal to add a new model, scheduler, or pipeline. + +If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: +Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. +Link to any of its open-source implementation. +Link to the model weights if they are available. +If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. +You can open a request for a model/pipeline/scheduler here. + +3. Answering issues on the GitHub issues tab + +Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: +Be as concise and minimal as possible +Stay on topic. An answer to the issue should concern the issue and only the issue. +Provide links to code, papers, or other sources that prove or encourage your point. +Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. +Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. 
It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord +If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. +For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull requst section. + +4. Fixing a Good first issue +Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: +a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. +b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. +c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. + +5. Contribute to the documentation + +A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. +Contributing to the library can have many forms: +Correcting spelling or grammatical errors. +Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it. +Correct the shape or dimensions of a docstring input or output tensor. +Clarify documentation that is hard to understand or incorrect. +Update outdated code examples. +Translating the documentation to another language. +Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. +Please have a look at this page on how to verify changes made to the documentation locally. + +6. Contribute a community pipeline + +Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: +Official Pipelines +Community Pipelines +Both official and community pipelines follow the same design and consist of the same type of components. +Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. 
+They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. +The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. +To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. +An example can be seen here. +Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. +Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. + +7. Contribute to training examples + +Diffusers examples are a collection of training scripts that reside in examples. +We support two types of training examples: +Official training examples +Research training examples +Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. +Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: + + + Copied +git clone https://github.com/huggingface/diffusers +as well as to install all additional dependencies required for training: + + + Copied +pip install -r /examples//requirements.txt +Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. +Training examples of the Diffusers library should adhere to the following philosophy: +All the code necessary to run the examples should be found in a single Python file +One should be able to run the example from the command line with python .py --args +Examples should be kept simple and serve as an example on how to use Diffusers for training. 
The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. +To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: +An example command on how to run the example script as shown here e.g.. +A link to some training results (logs, models, …) that show what the user can expect as shown here e.g.. +If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. +If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. + +8. Fixing a Good second issue +Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a second good issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. + +9. Adding pipelines, models, schedulers + +Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. +By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. +Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: +Model or pipeline +Scheduler +Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that +we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please +open a Feedback issue instead so that it can be discussed whether a certain design +pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. 
+Please make sure to add links to the original codebase/paper to the PR and ideally also ping the +original author directly on the PR so that they can follow the progress and potentially help with questions. +If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. + +How to write a good issue + +The better your issue is written, the higher the chances that it will be quickly resolved. +Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. +Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. +Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. +Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. +Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. +Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. +Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. + +How to write a good PR + +Be a chameleon. 
Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. +Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. +If helpful, try to add a code snippet that displays an example of how your addition can be used. +The title of your pull request should be a summary of its contribution. +If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); +To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; +Try to formulate and format your text as explained in How to write a good issue. +Make sure existing tests pass; +Add high-coverage tests. No quality testing = no merge. +If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub actions does every night! +All public methods must have informative docstrings that work nicely with markdown. See [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example. +Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. + +How to open a PR + +Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. +You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. +Follow these steps to start contributing (supported Python versions): +Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. +Clone your fork to your local disk, and add the base repository as a remote: + + + Copied +$ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git +Create a new branch to hold your development changes: + + + Copied +$ git checkout -b a-descriptive-name-for-my-changes +Do not work on the main branch. +Set up a development environment by running the following command in a virtual environment: + + + Copied +$ pip install -e ".[dev]" +If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. +Develop the features on your branch. 
+As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: + + + Copied +$ pytest tests/.py +You can also run the full suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: + + + Copied +$ make test +🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: + + + Copied +$ make style +🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: + + + Copied +$ make quality +Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: + + + Copied +$ git add modified_file.py +$ git commit +It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: + + + Copied +$ git pull upstream main +Push the changes to your account using: + + + Copied +$ git push -u origin a-descriptive-name-for-my-changes +Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. +It’s ok if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. + +Tests + +An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. +We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: + + + Copied +$ python -m pytest -n auto --dist=loadfile -s -v ./tests/ +In fact, that’s how make test is implemented! +You can specify a smaller set of tests in order to test only the feature +you’re working on. +By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! + + + Copied +$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ +unittest is fully supported, here’s how to run tests with it: + + + Copied +$ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v + +Syncing forked main with upstream (HuggingFace) main + +To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: +When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. +If a PR is absolutely necessary, use the following steps after checking out your branch: + + + Copied +$ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing + +Style guide + +For documentation strings, 🧨 Diffusers follows the google style. 
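As a rough illustration of that layout, a Google-style docstring with Args and Returns sections might look like the sketch below. The function and its parameters are hypothetical and only demonstrate the formatting; the backticks keep the docstring rendering nicely as markdown, as required for public methods above. Copied
def scale_latents(latents, scale_factor=0.18215):
    """Scale a batch of latents before decoding.

    Args:
        latents (`torch.Tensor`):
            The latents to rescale.
        scale_factor (`float`, *optional*, defaults to 0.18215):
            The factor the latents are divided by.

    Returns:
        `torch.Tensor`: The rescaled latents.
    """
    return latents / scale_factor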
diff --git a/scrapped_outputs/245e2e0db818e1a2fca0df9fffd4d611.txt b/scrapped_outputs/245e2e0db818e1a2fca0df9fffd4d611.txt new file mode 100644 index 0000000000000000000000000000000000000000..d61c3f265da975aac5d562125c788f3e245e5b73 --- /dev/null +++ b/scrapped_outputs/245e2e0db818e1a2fca0df9fffd4d611.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 
96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. 
conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. 
flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/24633c8f8e59d27932cf6839d72990be.txt b/scrapped_outputs/24633c8f8e59d27932cf6839d72990be.txt new file mode 100644 index 0000000000000000000000000000000000000000..3de545917a945be758b5da9cc73ec3840eca6cd1 --- /dev/null +++ b/scrapped_outputs/24633c8f8e59d27932cf6839d72990be.txt @@ -0,0 +1,108 @@ +Installation + +Install 🤗 Diffusers for whichever deep learning library you’re working with. +🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and flax. Follow the installation instructions below for the deep learning library you are using: +PyTorch installation instructions. +Flax installation instructions. + +Install with pip + +You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. +Start by creating a virtual environment in your project directory: + + + Copied +python -m venv .env +Activate the virtual environment: + + + Copied +source .env/bin/activate +Now you’re ready to install 🤗 Diffusers with the following command: +For PyTorch + + + Copied +pip install diffusers["torch"] +For Flax + + + Copied +pip install diffusers["flax"] + +Install from source + +Before intsalling diffusers from source, make sure you have torch and accelerate installed. +For torch installation refer to the torch docs. +To install accelerate + + + Copied +pip install accelerate +Install 🤗 Diffusers from source with the following command: + + + Copied +pip install git+https://github.com/huggingface/diffusers +This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. 
+If you run into a problem, please open an Issue, so we can fix it even sooner! + +Editable install + +You will need an editable install if you’d like to: +Use the main version of the source code. +Contribute to 🤗 Diffusers and need to test changes in the code. +Clone the repository and install 🤗 Diffusers with the following commands: + + + Copied +git clone https://github.com/huggingface/diffusers.git +cd diffusers +For PyTorch + + + Copied +pip install -e ".[torch]" +For Flax + + + Copied +pip install -e ".[flax]" +These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.7/site-packages/, Python will also search the folder you cloned to: ~/diffusers/. +You must keep the diffusers folder if you want to keep using the library. +Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: + + + Copied +cd ~/diffusers/ +git pull +Your Python environment will find the main version of 🤗 Diffusers on the next run. + +Notice on telemetry logging + +Our library gathers telemetry information during from_pretrained() requests. +This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the HuggingFace Hub, +and is not collected during local usage. +We understand that not everyone wants to share additional information, and we respect your privacy, +so you can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: +On Linux/MacOS: + + + Copied +export DISABLE_TELEMETRY=YES +On Windows: + + + Copied +set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/247e8c5e5717cd247899e0f69f30aba3.txt b/scrapped_outputs/247e8c5e5717cd247899e0f69f30aba3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/247e8c5e5717cd247899e0f69f30aba3.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. 
Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! 
Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/2480baf10f21e4e618cd23d70131dd64.txt b/scrapped_outputs/2480baf10f21e4e618cd23d70131dd64.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/2480baf10f21e4e618cd23d70131dd64.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). 
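As a quick illustration of this tip, the sketch below passes a non-default noise_level to the image variation pipeline described later on this page (the checkpoint and input image mirror the example below; the particular value of 100 is only illustrative):

import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# noise_level=0 (the default) adds no extra noise to the CLIP image embeddings;
# higher values increase the variation between the input and the generated image.
image = pipe(init_image, noise_level=100).images[0]
image.save("variation_noise_100.png")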
Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. 
prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. 
Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
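For completeness, here is a minimal sketch of toggling the VAE slicing methods documented above on a loaded pipeline (the checkpoint name and prompt simply mirror the example earlier in this section, which notes the model path may still need updating):

import torch
from diffusers import StableUnCLIPPipeline

pipe = StableUnCLIPPipeline.from_pretrained(
    "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Decode the VAE output in slices to lower peak memory at a small speed cost.
pipe.enable_vae_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Switch back to single-step decoding once memory is no longer a concern.
pipe.disable_vae_slicing()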
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. 
When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/24949e70efd09162b42e7236943050d5.txt b/scrapped_outputs/24949e70efd09162b42e7236943050d5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/249d5141516b26a6b8d64f5c5bc08686.txt b/scrapped_outputs/249d5141516b26a6b8d64f5c5bc08686.txt new file mode 100644 index 0000000000000000000000000000000000000000..2977789ef5a5b67f0bb0a50bc4374d1f6abe0ae5 --- /dev/null +++ b/scrapped_outputs/249d5141516b26a6b8d64f5c5bc08686.txt @@ -0,0 +1,115 @@ +Using KerasCV Stable Diffusion Checkpoints in Diffusers + +This is an experimental feature. +KerasCV provides APIs for implementing various computer vision workflows. It +also provides the Stable Diffusion v1 and v2 +models. Many practitioners find it easy to fine-tune the Stable Diffusion models shipped by KerasCV. However, as of this writing, KerasCV offers limited support to experiment with Stable Diffusion models for inference and deployment. On the other hand, +Diffusers provides tooling dedicated to this purpose (and more), such as different noise schedulers, flash attention, and other +optimization techniques. +How about fine-tuning Stable Diffusion models in KerasCV and exporting them such that they become compatible with Diffusers to combine the +best of both worlds? We have created a tool that +lets you do just that! It takes KerasCV Stable Diffusion checkpoints and exports them to Diffusers-compatible checkpoints. +More specifically, it first converts the checkpoints to PyTorch and then wraps them into a +StableDiffusionPipeline which is ready +for inference. 
Finally, it pushes the converted checkpoints to a repository on the Hugging Face Hub. +We welcome you to try out the tool here +and share feedback via discussions. + +Getting Started + +First, you need to obtain the fine-tuned KerasCV Stable Diffusion checkpoints. We provide an +overview of the different ways Stable Diffusion models can be fine-tuned using diffusers. For the Keras implementation of some of these methods, you can check out these resources: +Teach StableDiffusion new concepts via Textual Inversion +Fine-tuning Stable Diffusion +DreamBooth +Prompt-to-Prompt editing +Stable Diffusion is comprised of the following models: +Text encoder +UNet +VAE +Depending on the fine-tuning task, we may fine-tune one or more of these components (the VAE is almost always left untouched). Here are some common combinations: +DreamBooth: UNet and text encoder +Classical text to image fine-tuning: UNet +Textual Inversion: Just the newly initialized embeddings in the text encoder + +Performing the Conversion + +Let’s use this checkpoint which was generated +by conducting Textual Inversion with the following “placeholder token”: . +On the tool, we supply the following things: +Path(s) to download the fine-tuned checkpoint(s) (KerasCV) +An HF token +Placeholder token (only applicable for Textual Inversion) + +As soon as you hit “Submit”, the conversion process will begin. Once it’s complete, you should see the following: + +If you click the link, you +should see something like so: + +If you head over to the model card of the repository, the +following should appear: + +Note that we’re not specifying the UNet weights here since the UNet is not fine-tuned during Textual Inversion. +And that’s it! You now have your fine-tuned KerasCV Stable Diffusion model in Diffusers 🧨. + +Using the Converted Model in Diffusers + +Just beside the model card of the repository, +you’d notice an inference widget to try out the model directly from the UI 🤗 + +On the top right hand side, we provide a “Use in Diffusers” button. If you click the button, you should see the following code-snippet: + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +The model is in standard diffusers format. Let’s perform inference! + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] +And we get: + +Note that if you specified a placeholder_token while performing the conversion, the tool will log it accordingly. Refer +to the model card of this repository +as an example. +We welcome you to use the tool for various Stable Diffusion fine-tuning scenarios and let us know your feedback! Here are some examples +of Diffusers checkpoints that were obtained using the tool: +sayakpaul/text-unet-dogs-kerascv_sd_diffusers_pipeline (DreamBooth with both the text encoder and UNet fine-tuned) +sayakpaul/unet-dogs-kerascv_sd_diffusers_pipeline (DreamBooth with only the UNet fine-tuned) + +Incorporating Diffusers Goodies 🎁 + +Diffusers provides various options that one can leverage to experiment with different inference setups. 
One particularly +useful option is the use of a different noise scheduler during inference other than what was used during fine-tuning. +Let’s try out the DPMSolverMultistepScheduler +which is different from the one (DDPMScheduler) used during +fine-tuning. +You can read more details about this process in this section. + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] + +One can also continue fine-tuning from these Diffusers checkpoints by leveraging some relevant tools from Diffusers. Refer here for +more details. For inference-specific optimizations, refer here. + +Known Limitations + +Only Stable Diffusion v1 checkpoints are supported for conversion in this tool. diff --git a/scrapped_outputs/24aa42e1941ad00cd7782cb6df527f94.txt b/scrapped_outputs/24aa42e1941ad00cd7782cb6df527f94.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/24c6f6153e88ff5e58067a8a42665d6c.txt b/scrapped_outputs/24c6f6153e88ff5e58067a8a42665d6c.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/24c6f6153e88ff5e58067a8a42665d6c.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
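As a concrete illustration of the scheduler speed/quality tradeoff mentioned in the tip above, the snippet below loads the official checkpoint and swaps in DDIMScheduler, one of the schedulers this pipeline supports; this is only a minimal sketch and the particular scheduler choice is illustrative. Copied
import torch
from diffusers import PaintByExamplePipeline, DDIMScheduler

# Load the official Paint by Example checkpoint in half precision.
pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
)

# Reuse the existing scheduler config so that only the sampling algorithm changes.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
Any of the other supported schedulers can be swapped in the same way.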
PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/24dca93ab43862a1f47c0ae1efdc9be5.txt b/scrapped_outputs/24dca93ab43862a1f47c0ae1efdc9be5.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/24dca93ab43862a1f47c0ae1efdc9be5.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/250af941bd4fc14a7eb9277c43338316.txt b/scrapped_outputs/250af941bd4fc14a7eb9277c43338316.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/250af941bd4fc14a7eb9277c43338316.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. 
Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namespace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all its components to the Hub. For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/2518d2d3577d80a091b096d41434c92f.txt b/scrapped_outputs/2518d2d3577d80a091b096d41434c92f.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/2518d2d3577d80a091b096d41434c92f.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields.
We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. 
Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/25221b6e05e1705f944718e3c5f621b8.txt b/scrapped_outputs/25221b6e05e1705f944718e3c5f621b8.txt new file mode 100644 index 0000000000000000000000000000000000000000..0824b6b7ee98c1a6f9d50f91c37b16cb080bb278 --- /dev/null +++ b/scrapped_outputs/25221b6e05e1705f944718e3c5f621b8.txt @@ -0,0 +1,46 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). 
timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. 
pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/2548858d6db4f87695b8b3d6cb6610ac.txt b/scrapped_outputs/2548858d6db4f87695b8b3d6cb6610ac.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/2548858d6db4f87695b8b3d6cb6610ac.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. 
For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. 
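To make that last step concrete, here is a heavily simplified, hypothetical sketch of a single optimization step; names such as orig_embeds and placeholder_token_ids are illustrative, and the actual script additionally handles mixed precision, gradient accumulation, 🤗 Accelerate integration, and prediction types other than epsilon. Copied
import torch
import torch.nn.functional as F

def training_step(batch, vae, unet, text_encoder, noise_scheduler, optimizer, placeholder_token_ids, orig_embeds):
    # Encode the example images into latents and corrupt them at a random timestep.
    latents = vae.encode(batch["pixel_values"]).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition the UNet on a prompt that contains the special placeholder token.
    encoder_hidden_states = text_encoder(batch["input_ids"])[0]
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    # Match the predicted noise residual to the added noise. In Textual Inversion the
    # optimizer typically holds only the token embedding parameters; the UNet and VAE stay frozen.
    loss = F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Keep every token embedding frozen except the placeholder token's by
    # restoring the original rows of the embedding matrix after the update.
    embeddings = text_encoder.get_input_embeddings().weight
    keep = torch.ones(embeddings.shape[0], dtype=torch.bool, device=embeddings.device)
    keep[placeholder_token_ids] = False
    with torch.no_grad():
        embeddings[keep] = orig_embeds[keep]
    return loss.detach()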
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="A train" +--num_validation_images=4 +--validation_steps=100 PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A train", num_inference_steps=50).images[0] +image.save("cat-train.png") Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/259ce88fa30e9a143db19f8a0d5b423a.txt b/scrapped_outputs/259ce88fa30e9a143db19f8a0d5b423a.txt new file mode 100644 index 0000000000000000000000000000000000000000..09ea06c0ca900208018e1f32ac434cb633735212 --- /dev/null +++ b/scrapped_outputs/259ce88fa30e9a143db19f8a0d5b423a.txt @@ -0,0 +1,86 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. 
The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Also, to know more about reducing the memory usage of this pipeline, refer to the [“Reduce memory usage”] section here. Sample output with I2VGenXL: library. + Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) — +A I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) → pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. target_fps (int, optional) — +Frames per second. The rate at which the generated images shall be exported to a video after generation. This is also used as a “micro-condition” while generation. num_frames (int, optional) — +The number of video frames to generate. num_inference_steps (int, optional) — +The number of denoising steps. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) — +The number of images to generate per prompt. decode_chunk_size (int, optional) — +The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency +between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once +for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple + +If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch +>>> from diffusers import I2VGenXLPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +>>> pipeline.enable_model_cpu_offload() + +>>> image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +>>> image = load_image(image_url).convert("RGB") + +>>> prompt = "Papers were floating in the air on a table in the library" +>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +>>> generator = torch.manual_seed(8888) + +>>> frames = pipeline( +... prompt=prompt, +... image=image, +... num_inference_steps=50, +... negative_prompt=negative_prompt, +... guidance_scale=9.0, +... generator=generator +... ).frames[0] +>>> video_path = export_to_gif(frames, "i2v.gif") encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_videos_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
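If you reuse the same prompt across several generations, you can compute the text embeddings once with encode_prompt() and pass them back in through prompt_embeds and negative_prompt_embeds. The snippet below is a minimal sketch that assumes encode_prompt() returns a (prompt_embeds, negative_prompt_embeds) tuple, as it does in other Diffusers pipelines; double-check the return value in your installed version: Copied
import torch
from diffusers import I2VGenXLPipeline
from diffusers.utils import export_to_gif, load_image

pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16").to("cuda")

# Encode the prompt once (assumed return value: prompt_embeds, negative_prompt_embeds).
prompt_embeds, negative_prompt_embeds = pipeline.encode_prompt(
    "Papers were floating in the air on a table in the library",
    device="cuda",
    num_videos_per_prompt=1,
    negative_prompt="Distorted, discontinuous, ugly, blurry, low resolution, motionless, static",
)

image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png"
).convert("RGB")

# Reuse the cached embeddings across several seeds without re-running the text encoder.
for seed in (0, 1, 2):
    frames = pipeline(
        image=image,
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        num_inference_steps=50,
        guidance_scale=9.0,
        generator=torch.manual_seed(seed),
    ).frames[0]
    export_to_gif(frames, f"i2v_seed_{seed}.gif")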
I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — List of video outputs. It can be a nested list of length batch_size, with each sub-list containing denoised PIL image sequences of length num_frames, or a NumPy array or Torch tensor of shape (batch_size, num_frames, channels, height, width). Output class for image-to-video pipeline. diff --git a/scrapped_outputs/25a4d5d0681cb060596acb3ead3431c8.txt b/scrapped_outputs/25a4d5d0681cb060596acb3ead3431c8.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/25a4d5d0681cb060596acb3ead3431c8.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it!
🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. 
In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/25cfd407b02932cbd3416d03fb580467.txt b/scrapped_outputs/25cfd407b02932cbd3416d03fb580467.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/25cfd407b02932cbd3416d03fb580467.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! 
The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/25dd035ce3f5680d4f4439dd6a0efb55.txt b/scrapped_outputs/25dd035ce3f5680d4f4439dd6a0efb55.txt new file mode 100644 index 0000000000000000000000000000000000000000..13e167b0959d0af366c380365e13d791431b1240 --- /dev/null +++ b/scrapped_outputs/25dd035ce3f5680d4f4439dd6a0efb55.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. 
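As a quick illustration of the convert_method argument documented above, here is a small sketch (the image URL is only an example from the Diffusers docs dataset): Copied
from diffusers.utils import load_image

url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png"

# Default behavior: the loaded image is converted to "RGB".
image = load_image(url)

# With convert_method you control the conversion yourself, for example forcing grayscale.
gray = load_image(url, convert_method=lambda img: img.convert("L"))

print(image.mode, gray.mode)  # RGB L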
export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/25edcae861f71ed1d0de7bde148bfd95.txt b/scrapped_outputs/25edcae861f71ed1d0de7bde148bfd95.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/25edcae861f71ed1d0de7bde148bfd95.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. 
To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. 
Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. 
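Before tracing the full UNet below, it can help to see what tracing does on a tiny toy module first; this sketch is only an illustration and is not part of the original example: Copied
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example_input = torch.randn(1, 4)

# torch.jit.trace records the operations executed for this example input
# and returns an optimized, standalone ScriptModule.
traced = torch.jit.trace(model, example_input)
traced.save("tiny_traced.pt")

reloaded = torch.jit.load("tiny_traced.pt")
assert torch.allclose(model(example_input), reloaded(example_input))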
To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. 
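This is because, on PyTorch 2.0 and newer, Diffusers dispatches attention to torch.nn.functional.scaled_dot_product_attention by default. The sketch below only makes that choice explicit; it assumes AttnProcessor2_0 is available in your version of Diffusers: Copied
import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# Explicitly select the PyTorch 2.0 scaled-dot-product-attention processor.
# On torch >= 2.0 this is usually already the default, so the call is typically a no-op.
pipe.unet.set_attn_processor(AttnProcessor2_0())

with torch.inference_mode():
    image = pipe("a small cat").images[0]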
To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/263cc3b6d9c3f733f01062ff724687fa.txt b/scrapped_outputs/263cc3b6d9c3f733f01062ff724687fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2641ff0da2cbf99eff28dcf870761064.txt b/scrapped_outputs/2641ff0da2cbf99eff28dcf870761064.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/2641ff0da2cbf99eff28dcf870761064.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. 
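Before looking at the CPU and GPU cases separately, this tiny plain-PyTorch sketch (not from the original guide) shows the underlying behavior, namely that unseeded torch.randn changes on every run while a seeded Generator makes it repeatable: Copied
import torch

# Unseeded: a different tensor every time this script runs.
print(torch.randn(2, 2).sum())

# Seeded Generator: the same tensor every time this script runs.
generator = torch.Generator(device="cpu").manual_seed(0)
print(torch.randn(2, 2, generator=generator).sum())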
CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. 
In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/265d9850d9b1d1d30a162e8bb1f0225e.txt b/scrapped_outputs/265d9850d9b1d1d30a162e8bb1f0225e.txt new file mode 100644 index 0000000000000000000000000000000000000000..0c8b48f867375e08b716930801b9b71701856adf --- /dev/null +++ b/scrapped_outputs/265d9850d9b1d1d30a162e8bb1f0225e.txt @@ -0,0 +1,3 @@ +OpenVINO + +Under construction 🚧 diff --git a/scrapped_outputs/268a1cfeac1ba417fa3b802104e2770d.txt b/scrapped_outputs/268a1cfeac1ba417fa3b802104e2770d.txt new file mode 100644 index 0000000000000000000000000000000000000000..880c99be557ecb33b18849c5b32298e2e7b85f9f --- /dev/null +++ b/scrapped_outputs/268a1cfeac1ba417fa3b802104e2770d.txt @@ -0,0 +1,36 @@ +Load LoRAs for inference There are many adapter types (with LoRAs being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. In this tutorial, you’ll learn how to easily load and manage adapters for inference with the 🤗 PEFT integration in 🤗 Diffusers. You’ll use LoRA as the main adapter technique, so you’ll see the terms LoRA and adapter used interchangeably. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate peft diffusers Now, load a pipeline with a Stable Diffusion XL (SDXL) checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a CiroN2022/toy-face adapter with the load_lora_weights() method. 
With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which let’s you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") Make sure to include the token toy_face in the prompt and then you can perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images and call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter, but you can activate the "pixel" adapter with the set_adapters() method: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Make sure you include the token pixel art in your prompt to generate a pixel art image: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Merge adapters You can also merge different adapter checkpoints for inference to blend their styles together. Once again, use the set_adapters() method to activate the pixel and toy adapters and specify the weights for how they should be merged. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. Remember to use the trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl (these are found in their repositories) in the prompt to generate an image. Copied prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the Merge LoRAs guide! To return to only using one adapter, use the set_adapters() method to activate the "toy" adapter: Copied pipe.set_adapters("toy") + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Or to disable all adapters entirely, use the disable_lora() method to return the base model. 
Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Manage active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, use the get_active_adapters() method to check the list of active adapters: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} diff --git a/scrapped_outputs/268d94abe09dcf04757bf1baa37d30b9.txt b/scrapped_outputs/268d94abe09dcf04757bf1baa37d30b9.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/268d94abe09dcf04757bf1baa37d30b9.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. 
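For example, once you are done comparing results (keep FreeU enabled for the inference example that follows), switching it off again is a single call: Copied
pipeline.disable_freeu()  # restores the stock UNet skip/backbone scaling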
And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/2692e265527a797ec2775a12f326c8fa.txt b/scrapped_outputs/2692e265527a797ec2775a12f326c8fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/2692e265527a797ec2775a12f326c8fa.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. 
Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. 
diff --git a/scrapped_outputs/26983426d96ef5731f7c1fef09a3572f.txt b/scrapped_outputs/26983426d96ef5731f7c1fef09a3572f.txt new file mode 100644 index 0000000000000000000000000000000000000000..31cbdde7d3f5e542cc1b460fc970c1544f49a07d --- /dev/null +++ b/scrapped_outputs/26983426d96ef5731f7c1fef09a3572f.txt @@ -0,0 +1,71 @@ +Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with an SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale = 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected.
When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale = 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on which adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage.
Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the UNet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the UNet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5]) +# Fuses the LoRAs into the UNet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the UNet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] Saving a pipeline after fusing the adapters To properly save a pipeline after it’s been loaded with the adapters, it should be serialized like so: Copied pipe.fuse_lora(lora_scale=1.0) +pipe.unload_lora_weights() +pipe.save_pretrained("path-to-pipeline") diff --git a/scrapped_outputs/26b2c66d74b10b0a6e68040532bdfe57.txt b/scrapped_outputs/26b2c66d74b10b0a6e68040532bdfe57.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dd3da44ce18bcb5133ef0eef41276eeb6637b6a --- /dev/null +++ b/scrapped_outputs/26b2c66d74b10b0a6e68040532bdfe57.txt @@ -0,0 +1,845 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104.
❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. 
If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... 
) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. 
When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
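As a quick usage sketch for the memory-saving switches documented above, the snippet below combines tiled and sliced VAE decoding with model CPU offloading on the ControlNet pipeline. It is an illustrative combination rather than an official recipe; the checkpoints are the same ones used in the Canny example earlier on this page, and `canny_image` is assumed to be prepared as shown there.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Keep peak GPU memory low: offload idle submodules to CPU and
# decode the VAE in slices/tiles instead of all at once.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# `canny_image` is assumed to be a control image prepared as in the Canny example above.
# image = pipe("futuristic-looking woman", image=canny_image, num_inference_steps=20).images[0]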
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. 
If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). 
It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for NumPy array, it would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate opencv-python +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> from PIL import Image +>>> import cv2 +>>> import numpy as np +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... 
) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
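The memory-saving helpers documented above are usually combined rather than used one at a time. Below is a minimal sketch (not part of the original example) of a typical low-memory setup for this inpainting pipeline; it assumes a CUDA GPU, the accelerate package for CPU offloading, and the same checkpoints used in the example above:
+import torch
+from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel
+
+controlnet = ControlNetModel.from_pretrained(
+    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
+)
+pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
+)
+
+# Keep submodules on the GPU only while they are needed (requires accelerate).
+pipe.enable_model_cpu_offload()
+
+# Decode latents slice by slice so larger batch sizes fit into memory.
+pipe.enable_vae_slicing()
+
+# Attention slicing is deliberately left out: as noted above, it can slow things
+# down when PyTorch 2.0 SDPA or xFormers attention is already active.
Exact savings depend on the hardware and the resolution being generated, so treat this as a starting point rather than a recipe.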
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as the op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for the VAE, which does not accept the attention shape used by Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, the function uses self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... 
return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/26ca6b99768fb9ff7e1c812d205aecef.txt b/scrapped_outputs/26ca6b99768fb9ff7e1c812d205aecef.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2398d1de27ff4e269edc24774c357698663a5e3 --- /dev/null +++ b/scrapped_outputs/26ca6b99768fb9ff7e1c812d205aecef.txt @@ -0,0 +1,43 @@ +Text-guided depth-to-image generation + + + + + + + + + + + + +The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. +Start by creating an instance of the StableDiffusionDepth2ImgPipeline: + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") +Now pass your prompt to the pipeline. 
You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: + + + Copied +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +n_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] +image +Input +Output + + +Play around with the Spaces below and see if you notice a difference between generated images with and without a depth map! diff --git a/scrapped_outputs/26da9a0ae1d4acf71f515d27e04944b2.txt b/scrapped_outputs/26da9a0ae1d4acf71f515d27e04944b2.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/26da9a0ae1d4acf71f515d27e04944b2.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/2707e0ca4fea320cb9e5fe879b8bf87d.txt b/scrapped_outputs/2707e0ca4fea320cb9e5fe879b8bf87d.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/2707e0ca4fea320cb9e5fe879b8bf87d.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. 
This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. 
Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/273280d3ba051ba2f09fdffa94c1ba74.txt b/scrapped_outputs/273280d3ba051ba2f09fdffa94c1ba74.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/273280d3ba051ba2f09fdffa94c1ba74.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/27dfab18dccf1bcf479e8e308704b1eb.txt b/scrapped_outputs/27dfab18dccf1bcf479e8e308704b1eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/27dfab18dccf1bcf479e8e308704b1eb.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. 
To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. 
timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/28159a13554a75bd8001716a09b8acdb.txt b/scrapped_outputs/28159a13554a75bd8001716a09b8acdb.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/28159a13554a75bd8001716a09b8acdb.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
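The scheduler note above refers to the standard Diffusers pattern of rebuilding a different scheduler from the current scheduler's config. A minimal sketch of that pattern follows; the checkpoint and scheduler class are illustrative choices, not taken from this page:
+from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
+
+# Load a pipeline, then swap in a different scheduler built from the existing config.
+pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)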
DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return an AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To display in Google Colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/28162d27557090c840d55d9c80eaffc3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/282c1843a217c17a6fe810aae4e82c00.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/282c1843a217c17a6fe810aae4e82c00.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
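A processor is attached to a model rather than used on its own. As a rough sketch (the checkpoint id is only an example), one of these classes can be assigned to a loaded UNet with set_attn_processor:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

# example checkpoint; any pipeline whose UNet exposes set_attn_processor behaves the same way
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# explicitly opt into PyTorch 2.0 scaled dot-product attention...
pipe.unet.set_attn_processor(AttnProcessor2_0())

# ...or fall back to the basic default processor
pipe.unet.set_attn_processor(AttnProcessor())
```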
FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in the future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set it to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set it to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to False) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set it to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. diff --git a/scrapped_outputs/2830e5b374d188a38f529dfb72ff9559.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/2830e5b374d188a38f529dfb72ff9559.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble their training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.
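For example, a local folder of images can be loaded with the imagefolder loader from 🤗 Datasets, which is the same pattern the training script uses internally; the directory path below is only a placeholder:

```python
from datasets import load_dataset

# "./my_training_images" is a hypothetical folder containing your training images
dataset = load_dataset("imagefolder", data_dir="./my_training_images", split="train")
print(dataset)
print(dataset[0]["image"])  # each example holds a PIL.Image.Image
```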
Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training 
loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/284c2fbfc5293bfc679534842acf1598.txt b/scrapped_outputs/284c2fbfc5293bfc679534842acf1598.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/284c2fbfc5293bfc679534842acf1598.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. 
Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/28515f6f2a098ed74b2149527f61d90f.txt b/scrapped_outputs/28515f6f2a098ed74b2149527f61d90f.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/28515f6f2a098ed74b2149527f61d90f.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. 
+For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/286a2aa6dfaafeda3f253d53f26c7c51.txt b/scrapped_outputs/286a2aa6dfaafeda3f253d53f26c7c51.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/286a2aa6dfaafeda3f253d53f26c7c51.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. 
To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. 
scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/28a22fd5781dcecc4f8fd71de7491c84.txt b/scrapped_outputs/28a22fd5781dcecc4f8fd71de7491c84.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/28a22fd5781dcecc4f8fd71de7491c84.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. 
Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. 
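If you prefer to inspect that list programmatically, a minimal sketch (reusing the pipeline object loaded earlier in this guide) is to print the class names from the compatibles property:

```python
# `pipeline` is the Stable Diffusion pipeline loaded at the start of this guide
for scheduler_class in pipeline.scheduler.compatibles:
    print(scheduler_class.__name__)
```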
Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. 
It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/28e97ba2ca1472b23fdb28fcbfbd99ee.txt b/scrapped_outputs/28e97ba2ca1472b23fdb28fcbfbd99ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/28e97ba2ca1472b23fdb28fcbfbd99ee.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. 
Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. 
Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/28e9caf9e649d2f1ef03be40f4057ae7.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/28e9caf9e649d2f1ef03be40f4057ae7.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/2907447a6668b5dba9478ea898ccce8d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/290ff96d6c66033f8edc5609c9649461.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/290ff96d6c66033f8edc5609c9649461.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out its API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps.
Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = 
"bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/2910915e902e61751f9da17199ae454c.txt b/scrapped_outputs/2910915e902e61751f9da17199ae454c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e98a635b56d7d8021b54ca94665324a1ae805e1f --- /dev/null +++ b/scrapped_outputs/2910915e902e61751f9da17199ae454c.txt @@ -0,0 +1,4 @@ +Overview + +Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🧨 Diffuser’s goal is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. +This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You can also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/292c4dc409c56c7622642542f4bdcfc2.txt b/scrapped_outputs/292c4dc409c56c7622642542f4bdcfc2.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/292c4dc409c56c7622642542f4bdcfc2.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. 
Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which are available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than on the ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could very well use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o output --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint to the script with the -i flag. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE.
You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. 
Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/293ae101220cf06ae4fb1f963bd11b45.txt b/scrapped_outputs/293ae101220cf06ae4fb1f963bd11b45.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/293ae101220cf06ae4fb1f963bd11b45.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. 
This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. 
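Before looking at each class in detail, a minimal sketch may help make the separation concrete. It is illustrative only (the checkpoint name is just an example): a pipeline bundles separately trained models and a scheduler as named components, and the scheduler can be swapped independently through its config. Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# A pipeline bundles separately trained models plus a scheduler as named components
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
print(pipe.components.keys())  # e.g. vae, text_encoder, tokenizer, unet, scheduler, ...

# Schedulers are self-contained and can be exchanged via their config
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)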
Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. 
it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/294495901b3e67976cb68a39450c616c.txt b/scrapped_outputs/294495901b3e67976cb68a39450c616c.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/294495901b3e67976cb68a39450c616c.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. 
s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. 
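For orientation, the sketch below shows roughly how these methods fit together in a sampling loop. It loosely follows the deprecated KarrasVePipeline and is not a drop-in recipe: the checkpoint name, the schedule attribute, and the input/output scaling of the model call are assumptions that may differ from your setup. Copied
# Rough sketch only; checkpoint name is a placeholder for a variance-expanding UNet2DModel.
import torch
from diffusers import UNet2DModel, KarrasVeScheduler

unet = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256")  # assumed example checkpoint
scheduler = KarrasVeScheduler()
num_inference_steps = 50

scheduler.set_timesteps(num_inference_steps)
sample = torch.randn(1, 3, unet.config.sample_size, unet.config.sample_size) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    sigma = scheduler.schedule[t]  # assumed attribute holding the sigma schedule set by set_timesteps
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # stochastic "churn" step: add noise to reach a higher noise level sigma_hat
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma)

    # denoise and take one step of the reverse SDE
    model_output = (sigma_hat / 2) * unet((sample_hat + 1) / 2, sigma_hat / 2).sample
    step_output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

    if sigma_prev != 0:
        # second-order correction of the predicted sample
        model_output = (sigma_prev / 2) * unet((step_output.prev_sample + 1) / 2, sigma_prev / 2).sample
        step_output = scheduler.step_correct(
            model_output, sigma_hat, sigma_prev, sample_hat, step_output.prev_sample, step_output.derivative
        )

    sample = step_output.prev_sample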
KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/2945bf4f1ace07c5b9d1178845285bff.txt b/scrapped_outputs/2945bf4f1ace07c5b9d1178845285bff.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/2945bf4f1ace07c5b9d1178845285bff.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training, which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, the training loop starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.
Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 
Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/295fa1ca0df951537edc937c6abf5fd7.txt b/scrapped_outputs/295fa1ca0df951537edc937c6abf5fd7.txt new file mode 100644 index 0000000000000000000000000000000000000000..96c0514d704cece83e17fb8a355ec25c182d1eb8 --- /dev/null +++ b/scrapped_outputs/295fa1ca0df951537edc937c6abf5fd7.txt @@ -0,0 +1,24 @@ +Unconditional Latent Diffusion Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. 
However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMPipeline class diffusers.LDMPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: DDIMScheduler ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +DDIMScheduler is used in combination with unet to denoise the encoded image latents. Pipeline for unconditional image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None eta: float = 0.0 num_inference_steps: int = 50 output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +Number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import LDMPipeline + +>>> # load model and scheduler +>>> pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/2960c0fe3657578437298c64568ee67a.txt b/scrapped_outputs/2960c0fe3657578437298c64568ee67a.txt new file mode 100644 index 0000000000000000000000000000000000000000..27722ad672b52389a6db8c53b56dc33ed939a819 --- /dev/null +++ b/scrapped_outputs/2960c0fe3657578437298c64568ee67a.txt @@ -0,0 +1,129 @@ +LoRA Support in Diffusers + +Diffusers supports LoRA for faster fine-tuning of Stable Diffusion, allowing greater memory efficiency and easier portability. +Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in +LoRA: Low-Rank Adaptation of Large Language Models by Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. +In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition weight matrices (called update matrices) +to existing weights and only training those newly added weights. This has a couple of advantages: +Previous pretrained weights are kept frozen so that the model is not as prone to catastrophic forgetting. +Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable. +LoRA matrices are generally added to the attention layers of the original model and they control to which extent the model is adapted toward new training images via a scale parameter. +Note that the usage of LoRA is not limited to attention layers. In the original LoRA work, the authors found that just adapting +the attention layers of a language model is sufficient to obtain good downstream performance with great efficiency. This is why it’s common +to just add the LoRA weights to the attention layers of a model. +cloneofsimo was the first to try out LoRA training for Stable Diffusion in the popular lora GitHub repository. +LoRA allows us to achieve greater memory efficiency since the pretrained weights are kept frozen and only the LoRA weights are trained, thereby +allowing us to run fine-tuning on consumer GPUs like the Tesla T4, RTX 3080 or even RTX 2080 Ti! One can get access to GPUs like the T4 in the free +tiers of Kaggle Kernels and Google Colab Notebooks.
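To make the idea concrete, here is a tiny, self-contained sketch of what a LoRA update to a single frozen linear layer looks like. This is illustrative plain PyTorch, not the Diffusers implementation: only the two small rank-decomposition matrices are trained, and scale controls how strongly the learned update is applied on top of the frozen weight. Copied
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)                             # pretrained weight stays frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)   # A: project down to the low rank
        self.up = nn.Linear(rank, base.out_features, bias=False)    # B: project back up
        nn.init.zeros_(self.up.weight)                               # start as a no-op update
        self.scale = scale

    def forward(self, x):
        # W x + scale * B(A(x)); only `down` and `up` receive gradients
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 768 * 4 = 6144 trainable parameters instead of 768 * 768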
+ +Getting started with LoRA for fine-tuning + +Stable Diffusion can be fine-tuned in different ways: +Textual inversion +DreamBooth +Text2Image fine-tuning +We provide two end-to-end examples that show how to run fine-tuning with LoRA: +DreamBooth +Text2Image +If you want to perform DreamBooth training with LoRA, for instance, you would run: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="path-to-instance-images" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_dreambooth_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --checkpointing_steps=100 \ + --learning_rate=1e-4 \ + --report_to="wandb" \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --validation_prompt="A photo of sks dog in a bucket" \ + --validation_epochs=50 \ + --seed="0" \ + --push_to_hub +A similar process can be followed to fine-tune Stable Diffusion with LoRA on a custom dataset using the +examples/text_to_image/train_text_to_image_lora.py script. +Refer to the respective examples linked above to learn more. +When using LoRA we can use a much higher learning rate (typically 1e-4 as opposed to ~1e-6) compared to non-LoRA Dreambooth fine-tuning. +But there is no free lunch. For the given dataset and expected generation quality, you’d still need to experiment with +different hyperparameters. Here are some important ones: +Training time: learning rate and number of training steps. +Inference time: number of steps and scheduler type. +Additionally, you can follow this blog that documents some of our experimental +findings for performing DreamBooth training of Stable Diffusion. +When fine-tuning, the LoRA update matrices are only added to the attention layers. To enable this, we added new weight +loading functionalities. Their details are available here. + +Inference + +Assuming you used the examples/text_to_image/train_text_to_image_lora.py script to fine-tune Stable Diffusion on the Pokemon +dataset, you can perform inference like so: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_path = "sayakpaul/sd-model-finetuned-lora-t4" +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) +pipe.unet.load_attn_procs(model_path) +pipe.to("cuda") + +prompt = "A pokemon with blue eyes." +image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") +Here are some example images you can expect: + +sayakpaul/sd-model-finetuned-lora-t4 contains LoRA fine-tuned update matrices +which are only about 3 MB in size. During inference, the pre-trained Stable Diffusion checkpoints are loaded alongside these update +matrices and then they are combined to run inference. +You can use the huggingface_hub library to retrieve the base model +from sayakpaul/sd-model-finetuned-lora-t4 like so: + + + Copied +from huggingface_hub.repocard import RepoCard + +card = RepoCard.load("sayakpaul/sd-model-finetuned-lora-t4") +base_model = card.data.to_dict()["base_model"] +# 'CompVis/stable-diffusion-v1-4' +And then you can use pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16). +This is especially useful when you don’t want to hardcode the base model identifier when initializing the StableDiffusionPipeline.
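Putting the two snippets above together, a typical inference flow first reads the base model id from the LoRA repository card and then loads the LoRA attention weights on top of it: Copied
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub.repocard import RepoCard

lora_model_id = "sayakpaul/sd-model-finetuned-lora-t4"
base_model = RepoCard.load(lora_model_id).data.to_dict()["base_model"]  # 'CompVis/stable-diffusion-v1-4'

pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16)
pipe.unet.load_attn_procs(lora_model_id)
pipe.to("cuda")

image = pipe("A pokemon with blue eyes.", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("pokemon.png")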
+Inference for DreamBooth training remains the same. Check +this section for more details. + +Merging LoRA with original model + +When performing inference, you can merge the trained LoRA weights with the frozen pre-trained model weights, to interpolate between the original model’s inference result (as if no fine-tuning had occurred) and the fully fine-tuned version. +You can adjust the merging ratio with a parameter called α (alpha) in the paper, or scale in our implementation. You can tweak it with the following code, that passes scale as cross_attention_kwargs in the pipeline call: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_path = "sayakpaul/sd-model-finetuned-lora-t4" +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) +pipe.unet.load_attn_procs(model_path) +pipe.to("cuda") + +prompt = "A pokemon with blue eyes." +image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5, cross_attention_kwargs={"scale": 0.5}).images[0] +image.save("pokemon.png") +A value of 0 is the same as not using the LoRA weights, whereas 1 means only the LoRA fine-tuned weights will be used. Values between 0 and 1 will interpolate between the two versions. + +Known limitations + +Currently, we only support LoRA for the attention layers of UNet2DConditionModel. diff --git a/scrapped_outputs/297db8bf6c5f61f8c060d5f948fa1e85.txt b/scrapped_outputs/297db8bf6c5f61f8c060d5f948fa1e85.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/297db8bf6c5f61f8c060d5f948fa1e85.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. --pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. 
Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. 
Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100 MB). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl.wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/29ca32d756a015f2a8f5eb66264c82dc.txt b/scrapped_outputs/29ca32d756a015f2a8f5eb66264c82dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..c096748fc9379b50eaf61a541e581e9ab2545d55 --- /dev/null +++ b/scrapped_outputs/29ca32d756a015f2a8f5eb66264c82dc.txt @@ -0,0 +1,383 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi.
Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. +Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of frames to be generated.
Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch +from diffusers import TextToVideoZeroPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24 #24 ÷ 4fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for the temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL SupportIn order to use the SDXL model when generating a video from prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control To generate a video from prompt with additional pose control Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract pose from actual video, read ControlNet documentation. 
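If you want to produce pose frames from your own footage instead of the demo clip, the snippet below is a minimal sketch using the controlnet_aux package (an assumption on our side, as it is not part of this pipeline; the video path is a placeholder and the ControlNet documentation remains the canonical reference):

import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector  # assumed external package, not part of diffusers

# Load an OpenPose annotator (checkpoint name taken from common controlnet_aux examples)
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

reader = imageio.get_reader("path/to/your_video.mp4", "ffmpeg")  # placeholder path
frame_count = 8
pose_images = [open_pose(Image.fromarray(reader.get_data(i))) for i in range(frame_count)]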
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, swapping in the Canny edge ControlNet model; a minimal sketch is shown below.
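As a hedged illustration of the edge-guided variant (not an official example), the sketch below assumes canny_edges is a list of PIL edge maps, e.g. extracted with cv2.Canny or controlnet_aux, and otherwise mirrors the pose-control code above:

import imageio
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

model_id = "runwayml/stable-diffusion-v1-5"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Cross-frame attention keeps appearance and identity consistent across frames
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# Fix the latents so every frame starts from the same noise
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "Darth Vader dancing in a desert"  # hypothetical prompt
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)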
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
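Plain Text-To-Video with a custom DreamBooth model is not shown above; under the same pattern it would simply amount to loading TextToVideoZeroPipeline from the custom checkpoint. A hypothetical sketch (the repository name below is a placeholder):

import imageio
import torch
from diffusers import TextToVideoZeroPipeline

# "your-username/your-dreambooth-model" is a placeholder for a DreamBooth-trained SD checkpoint
pipe = TextToVideoZeroPipeline.from_pretrained(
    "your-username/your-dreambooth-model", torch_dtype=torch.float16
).to("cuda")

prompt = "a sks dog surfing a wave"  # use the identifier token your DreamBooth run was trained with
result = pipe(prompt=prompt, video_length=8).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)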
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. 
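The usage section above only shows how to load TextToVideoZeroSDXLPipeline; as a closing, hedged sketch, generation mirrors the Stable Diffusion example, assuming the SDXL variant also returns frames as float arrays in [0, 1]:

import imageio
import torch
from diffusers import TextToVideoZeroSDXLPipeline

pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt, video_length=8).images
result = [(r * 255).astype("uint8") for r in result]  # assumes frames come back as arrays in [0, 1]
imageio.mimsave("video_sdxl.mp4", result, fps=4)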
diff --git a/scrapped_outputs/29da7f569b6532e7a85c970c7cb06e75.txt b/scrapped_outputs/29da7f569b6532e7a85c970c7cb06e75.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/29da7f569b6532e7a85c970c7cb06e75.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.1 inherits best practicies from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... 
).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. 
When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class XLMRobertaTokenizer. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to be used as the mask for image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or a numpy array, mask should also be either a PIL image or a numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the text prompt, which will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1.
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/2a12b337390f3b3966043e8207239128.txt b/scrapped_outputs/2a12b337390f3b3966043e8207239128.txt new file mode 100644 index 0000000000000000000000000000000000000000..43caa4e9d69d10bca1cf7b16f04035243b30ac36 --- /dev/null +++ b/scrapped_outputs/2a12b337390f3b3966043e8207239128.txt @@ -0,0 +1,163 @@ +RePaint scheduler + + +Overview + +DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. +Intended for use with RePaintPipeline.
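No end-to-end example is included in this part of the document, so the following is a minimal sketch of how this scheduler is typically paired with RePaintPipeline. The checkpoint id, image URLs, and the jump_length/jump_n_sample values are illustrative assumptions rather than values taken from this document; the mask follows the convention documented for step() below, where 0.0 marks the region to inpaint and 1.0 the region to keep.

import torch
import PIL
import requests
from io import BytesIO

from diffusers import RePaintPipeline, RePaintScheduler


def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


# Illustrative inputs: any 256x256 image plus a same-sized keep-mask will do.
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"

original_image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))

# Pair the RePaint scheduler with an unconditional DDPM checkpoint (id is illustrative).
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,  # 0.0 = inpaint, 1.0 = keep
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,   # resampling schedule; typical values, not prescribed by this document
    jump_n_sample=10,
    generator=generator,
)
inpainted_image = output.images[0]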
+Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models +and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint + +RePaintScheduler + + +class diffusers.RePaintScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +eta: float = 0.0 +trained_betas: typing.Optional[numpy.ndarray] = None +clip_sample: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +eta (float) — +The weight of the noise added in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to DDIM and +1.0 to the DDPM scheduler, respectively. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip the predicted sample between -1 and 1 for numerical stability. + + + +RePaint is a scheduler for DDPM inpainting inside a given mask. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +original_image: FloatTensor +mask: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from the learned +diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of the sample being created by the diffusion process. + + +original_image (torch.FloatTensor) — +the original image to inpaint on. + + +mask (torch.FloatTensor) — +the mask where 0.0 values define which part of the original image to inpaint (change). + + +generator (torch.Generator, optional) — random number generator. + + +return_dict (bool) — option for returning a tuple rather than a +RePaintSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple.
When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/2a36e2c7cf202b03aeb507ccd3ffc6ae.txt b/scrapped_outputs/2a36e2c7cf202b03aeb507ccd3ffc6ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..db47242d5b00684f783685474911a8d89dd98131 --- /dev/null +++ b/scrapped_outputs/2a36e2c7cf202b03aeb507ccd3ffc6ae.txt @@ -0,0 +1,250 @@ +Logging + +🧨 Diffusers has a centralized logging system, so that you can setup the verbosity of the library easily. +Currently the default verbosity of the library is WARNING. +To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity +to the INFO level. + + + Copied +import diffusers + +diffusers.logging.set_verbosity_info() +You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: + + + Copied +DIFFUSERS_VERBOSITY=error ./myprogram.py +Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using +logger.warning_advice. For example: + + + Copied +DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py +Here is an example of how to use the same logger as the library in your own module or script: + + + Copied +from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") +All the methods of this logging module are documented below, the main ones are +logging.get_verbosity() to get the current level of verbosity in the logger and +logging.set_verbosity() to set the verbosity to the level of your choice. In order (from the least +verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are: +diffusers.logging.CRITICAL or diffusers.logging.FATAL (int value, 50): only report the most +critical errors. +diffusers.logging.ERROR (int value, 40): only report errors. +diffusers.logging.WARNING or diffusers.logging.WARN (int value, 30): only reports error and +warnings. This the default level used by the library. +diffusers.logging.INFO (int value, 20): reports error, warnings and basic information. +diffusers.logging.DEBUG (int value, 10): report all information. +By default, tqdm progress bars will be displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() can be used to suppress or unsuppress this behavior. + +Base setters + + +diffusers.utils.logging.set_verbosity_error + +< +source +> +( +) + + + +Set the verbosity to the ERROR level. + +diffusers.utils.logging.set_verbosity_warning + +< +source +> +( +) + + + +Set the verbosity to the WARNING level. + +diffusers.utils.logging.set_verbosity_info + +< +source +> +( +) + + + +Set the verbosity to the INFO level. + +diffusers.utils.logging.set_verbosity_debug + +< +source +> +( +) + + + +Set the verbosity to the DEBUG level. + +Other functions + + +diffusers.utils.logging.get_verbosity + +< +source +> +( +) +→ +int + +Returns + +int + + + +The logging level. + + +Return the current level for the 🤗 Diffusers’ root logger as an int. 
+🤗 Diffusers has following logging levels: +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + +diffusers.utils.logging.set_verbosity + +< +source +> +( +verbosity: int + +) + + +Parameters + +verbosity (int) — +Logging level, e.g., one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + + + + +Set the verbosity level for the 🤗 Diffusers’ root logger. + +diffusers.utils.get_logger + +< +source +> +( +name: typing.Optional[str] = None + +) + + + +Return a logger with the specified name. +This function is not supposed to be directly accessed unless you are writing a custom diffusers module. + +diffusers.utils.logging.enable_default_handler + +< +source +> +( +) + + + +Enable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.disable_default_handler + +< +source +> +( +) + + + +Disable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.enable_explicit_format + +< +source +> +( +) + + + + +Enable explicit formatting for every HuggingFace Diffusers’ logger. The explicit formatter is as follows: + + + Copied + [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. + + +diffusers.utils.logging.reset_format + +< +source +> +( +) + + + +Resets the formatting for HuggingFace Diffusers’ loggers. +All handlers currently bound to the root logger are affected by this method. + +diffusers.utils.logging.enable_progress_bar + +< +source +> +( +) + + + +Enable tqdm progress bar. + +diffusers.utils.logging.disable_progress_bar + +< +source +> +( +) + + + +Disable tqdm progress bar. diff --git a/scrapped_outputs/2a64b536c1e7e34105d1d3bf718d9639.txt b/scrapped_outputs/2a64b536c1e7e34105d1d3bf718d9639.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/2a64b536c1e7e34105d1d3bf718d9639.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
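The examples further down in this section render the generated views as GIFs; the output_type="mesh" option documented below can instead return a mesh for export to disk. A minimal sketch, assuming the export_to_ply helper in diffusers.utils is available in your installed version (the output file name is illustrative):

import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16).to("cuda")

# Request a mesh instead of rendered image frames.
mesh = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",
).images[0]

# Write the mesh (vertices and vertex colors) to a .ply file.
ply_path = export_to_ply(mesh, "shark_3d.ply")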
ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... 
num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/2a6c76e6d57e615c5b6d8d669505ffbe.txt b/scrapped_outputs/2a6c76e6d57e615c5b6d8d669505ffbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c0bd1dff1ec6566954d252cc7c21acc8827994e --- /dev/null +++ b/scrapped_outputs/2a6c76e6d57e615c5b6d8d669505ffbe.txt @@ -0,0 +1,310 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. 
+A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. 
Save the LoRA parameters corresponding to the UNet and text encoder. StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
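This section does not include a usage example for the SDXL variant, so here is a minimal, hedged sketch. The checkpoint id "diffusers/sdxl-instructpix2pix-768" is an assumption (substitute any SDXL InstructPix2Pix checkpoint you have access to); the input image URL and edit instruction reuse the InstructPix2Pix example earlier in this section.

import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline
from diffusers.utils import load_image

# Checkpoint id is an assumption; replace with an SDXL InstructPix2Pix checkpoint of your choice.
pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
).to("cuda")

image = load_image(
    "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
).resize((768, 768))

edited = pipe(
    prompt="make the mountains snowy",
    image=image,
    guidance_scale=5.0,        # text guidance strength
    image_guidance_scale=1.5,  # pushes the result toward the input image
    num_inference_steps=30,
).images[0]
edited.save("snowy_mountains.png")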
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is used to push the generated image towards the initial image. Image guidance +scale is enabled by setting image_guidance_scale > 1. A higher image guidance scale encourages +generated images that are closely linked to the source image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1.
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/2a6e59fca1e496bb2423ff5daf6cf300.txt b/scrapped_outputs/2a6e59fca1e496bb2423ff5daf6cf300.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2a70ea5bed4ba8a97cb4ce5dca53fcca.txt b/scrapped_outputs/2a70ea5bed4ba8a97cb4ce5dca53fcca.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/2a70ea5bed4ba8a97cb4ce5dca53fcca.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. 
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/2a7cc5561c0b0367672a23603465efa3.txt b/scrapped_outputs/2a7cc5561c0b0367672a23603465efa3.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/2a7cc5561c0b0367672a23603465efa3.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. 
Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. 
More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the super resolution UNet. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can then be left as None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14).
text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the super resolution UNet. decoder_guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/2a922a0c9951f08fd91056a1595e14a5.txt b/scrapped_outputs/2a922a0c9951f08fd91056a1595e14a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2aafb8f4487807b01b2c7b617c3ae7c7.txt b/scrapped_outputs/2aafb8f4487807b01b2c7b617c3ae7c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/2aafb8f4487807b01b2c7b617c3ae7c7.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. 
Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/2ab8e11bacf993fdaff6e5bae5a518be.txt b/scrapped_outputs/2ab8e11bacf993fdaff6e5bae5a518be.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/2ab8e11bacf993fdaff6e5bae5a518be.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. 
the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. 
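To make the scheduler/model separation described above concrete, here is a minimal sketch of the unrolled denoising loop a user writes when combining a model and a scheduler directly; it mirrors the pattern used elsewhere in these docs, and the checkpoint id is only an example: Copied
import torch
from diffusers import UNet2DModel, DDPMScheduler

repo_id = "google/ddpm-cat-256"  # example unconditional checkpoint
model = UNet2DModel.from_pretrained(repo_id)
scheduler = DDPMScheduler.from_pretrained(repo_id)

# start from pure Gaussian noise
sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)

for t in scheduler.timesteps:  # full training schedule; slow on CPU
    with torch.no_grad():
        noise_pred = model(sample, t).sample            # model predicts the noise residual
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # scheduler computes x_t -> x_t-1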
Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. 
it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/2ae370a1d8194705180a4a68d30e1ee6.txt b/scrapped_outputs/2ae370a1d8194705180a4a68d30e1ee6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2b1a1a31da55c72be7fa8f8b5cb730e8.txt b/scrapped_outputs/2b1a1a31da55c72be7fa8f8b5cb730e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/2b1a1a31da55c72be7fa8f8b5cb730e8.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. 
Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. 
+You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. 
It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... 
display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/2baf05dc41b3fd97625f2b2997be6406.txt b/scrapped_outputs/2baf05dc41b3fd97625f2b2997be6406.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/2baf05dc41b3fd97625f2b2997be6406.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. 
Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. 
For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/2bc56a01eb9fa1f226d7b598c72219b4.txt b/scrapped_outputs/2bc56a01eb9fa1f226d7b598c72219b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cb15709ef2db459331589418952eb68057fc110 --- /dev/null +++ b/scrapped_outputs/2bc56a01eb9fa1f226d7b598c72219b4.txt @@ -0,0 +1,26 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... diff --git a/scrapped_outputs/2bf33d2b3c26a00bcc0889fe583664d7.txt b/scrapped_outputs/2bf33d2b3c26a00bcc0889fe583664d7.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/2bf33d2b3c26a00bcc0889fe583664d7.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. 
To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. 
For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. 
For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
+For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . 
+├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/2c2c2a7745ec851b1b4fefc4f506ab4e.txt b/scrapped_outputs/2c2c2a7745ec851b1b4fefc4f506ab4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..d5b7d8b4b3e7332a66718ccf8ad4bab757e09676 --- /dev/null +++ b/scrapped_outputs/2c2c2a7745ec851b1b4fefc4f506ab4e.txt @@ -0,0 +1,526 @@ +Audio Diffusion + + +Overview + +Audio Diffusion by Robert Dargavel Smith. +Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to +and from mel spectrogram images. +The original codebase of this implementation can be found here, including +training scripts and example notebooks. 
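The conversion between raw audio and mel spectrogram images is handled by the Mel helper documented at the end of this page. A minimal sketch of that round trip (the local file name audio.wav and the chosen resolution are assumptions for illustration):

Copied from diffusers import Mel

mel = Mel(x_res=256, y_res=256)
mel.load_audio(audio_file="audio.wav")  # hypothetical local file
image = mel.audio_slice_to_image(0)     # grayscale x_res x y_res spectrogram of the first slice
audio = mel.image_to_audio(image)       # approximate waveform reconstruction (Griffin-Lim)
print(mel.get_sample_rate(), audio.shape)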
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_audio_diffusion.py +Unconditional Audio Generation + + +Examples: + + +Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=mel.get_sample_rate())) + +Latent Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Audio Diffusion with DDIM (faster) + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Variations, in-painting, out-painting etc. + + + + Copied +output = pipe( + raw_audio=output.audios[0, 0], + start_step=int(pipe.get_default_steps() / 2), + mask_start_secs=1, + mask_end_secs=1, +) +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +AudioDiffusionPipeline + + +class diffusers.AudioDiffusionPipeline + +< +source +> +( +vqvae: AutoencoderKL +unet: UNet2DConditionModel +mel: Mel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler] + +) + + +Parameters + +vqae (AutoencoderKL) — Variational AutoEncoder for Latent Audio Diffusion or None + + +unet (UNet2DConditionModel) — UNET model + + +mel (Mel) — transform audio <-> spectrogram + + +scheduler ([DDIMScheduler or DDPMScheduler]) — de-noising scheduler + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +batch_size: int = 1 +audio_file: str = None +raw_audio: ndarray = None +slice: int = 0 +start_step: int = 0 +steps: int = None +generator: Generator = None +mask_start_secs: float = 0 +mask_end_secs: float = 0 +step_generator: Generator = None +eta: float = 0 +noise: Tensor = None +encoding: Tensor = None +return_dict = True + +) +→ +List[PIL Image] + +Parameters + +batch_size (int) — number of samples to generate + + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + +slice (int) — slice number of audio to convert + + +start_step (int) — step to start from + + +steps (int) — number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) + + +generator (torch.Generator) — random number generator or None + + +mask_start_secs (float) — number of seconds of audio to mask (not generate) at start + + +mask_end_secs (float) — number of seconds of audio to mask (not generate) at end + + +step_generator (torch.Generator) — random number generator used to de-noise or None + + +eta (float) — parameter between 0 and 1 used with DDIM scheduler + + +noise (torch.Tensor) — noise tensor of shape (batch_size, 1, height, width) or None + + +encoding (torch.Tensor) — for UNet2DConditionModel shape (batch_size, seq_length, cross_attention_dim) + + +return_dict (bool) — if True return AudioPipelineOutput, ImagePipelineOutput else Tuple + + +Returns + +List[PIL Image] + + + +mel spectrograms (float, List[np.ndarray]): sample rate and raw audios + + +Generate random mel spectrogram from audio input and convert to audio. + +encode + +< +source +> +( +images: typing.List[PIL.Image.Image] +steps: int = 50 + +) +→ +np.ndarray + +Parameters + +images (List[PIL Image]) — list of images to encode + + +steps (int) — number of encoding steps to perform (defaults to 50) + + +Returns + +np.ndarray + + + +noise tensor of shape (batch_size, 1, height, width) + + +Reverse step process: recover noisy image from generated image. 
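A rough sketch of how encode() can be combined with the noise argument of __call__() to create a variation of an existing sample (this assumes the DDIM pipeline pipe and the output from the examples above, and is our illustration rather than an official recipe):

Copied # invert a generated spectrogram back to noise, then de-noise it again
noise = pipe.encode(output.images, steps=50)
# depending on the library version, convert the result to a torch.Tensor before passing it back
variation = pipe(noise=noise, steps=50)
display(variation.images[0])
display(Audio(variation.audios[0], rate=pipe.mel.get_sample_rate()))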
+ +get_default_steps + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of steps + + +Returns default number of steps recommended for inference + +get_input_dims + +< +source +> +( +) +→ +Tuple + +Returns + +Tuple + + + +(height, width) + + +Returns dimension of input image + +slerp + +< +source +> +( +x0: Tensor +x1: Tensor +alpha: float + +) +→ +torch.Tensor + +Parameters + +x0 (torch.Tensor) — first tensor to interpolate between + + +x1 (torch.Tensor) — seconds tensor to interpolate between + + +alpha (float) — interpolation between 0 and 1 + + +Returns + +torch.Tensor + + + +interpolated tensor + + +Spherical Linear intERPolation + +Mel + + +class diffusers.Mel + +< +source +> +( +x_res: int = 256 +y_res: int = 256 +sample_rate: int = 22050 +n_fft: int = 2048 +hop_length: int = 512 +top_db: int = 80 +n_iter: int = 32 + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + +sample_rate (int) — sample rate of audio + + +n_fft (int) — number of Fast Fourier Transforms + + +hop_length (int) — hop length (a higher number is recommended for lower than 256 y_res) + + +top_db (int) — loudest in decibels + + +n_iter (int) — number of iterations for Griffin Linn mel inversion + + + + +audio_slice_to_image + +< +source +> +( +slice: int + +) +→ +PIL Image + +Parameters + +slice (int) — slice number of audio to convert (out of get_number_of_slices()) + + +Returns + +PIL Image + + + +grayscale image of x_res x y_res + + +Convert slice of audio to spectrogram. + +get_audio_slice + +< +source +> +( +slice: int = 0 + +) +→ +np.ndarray + +Parameters + +slice (int) — slice number of audio (out of get_number_of_slices()) + + +Returns + +np.ndarray + + + +audio as numpy array + + +Get slice of audio. + +get_number_of_slices + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of spectograms audio can be sliced into + + +Get number of slices in audio. + +get_sample_rate + +< +source +> +( +) +→ +int + +Returns + +int + + + +sample rate of audio + + +Get sample rate: + +image_to_audio + +< +source +> +( +image: Image + +) +→ +audio (np.ndarray) + +Parameters + +image (PIL Image) — x_res x y_res grayscale image + + +Returns + +audio (np.ndarray) + + + +raw audio + + +Converts spectrogram to audio. + +load_audio + +< +source +> +( +audio_file: str = None +raw_audio: ndarray = None + +) + + +Parameters + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + + +Load audio. + +set_resolution + +< +source +> +( +x_res: int +y_res: int + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + + +Set resolution. diff --git a/scrapped_outputs/2c3d87d76a6ea558481251ffb8f4df1e.txt b/scrapped_outputs/2c3d87d76a6ea558481251ffb8f4df1e.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/2c3d87d76a6ea558481251ffb8f4df1e.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. 
Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert ( + "TPU" in device_type, + "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +) +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. 
This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). 
This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/2c54397c89dd7d0793056b8eb4546597.txt b/scrapped_outputs/2c54397c89dd7d0793056b8eb4546597.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/2c54397c89dd7d0793056b8eb4546597.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. 
To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. 
scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/2c56a934c3dabccc371956e55e1e583d.txt b/scrapped_outputs/2c56a934c3dabccc371956e55e1e583d.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc70c0d39d26f48206e829173b963d203c5dbc51 --- /dev/null +++ b/scrapped_outputs/2c56a934c3dabccc371956e55e1e583d.txt @@ -0,0 +1,102 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. 
DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers.
lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
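For illustration, a minimal standalone sketch (an assumption-based example, not part of the original reference) that builds the scheduler with the defaults listed above, prepares its discrete timesteps, and inspects the per-step solver orders: Copied
from diffusers import DPMSolverSinglestepScheduler

# Defaults from the signature above: solver_order=2, algorithm_type="dpmsolver++", solver_type="midpoint"
scheduler = DPMSolverSinglestepScheduler()

# Prepare the discrete timesteps before running a denoising loop
scheduler.set_timesteps(num_inference_steps=25)
print(scheduler.timesteps)           # timesteps the denoising loop iterates over
print(scheduler.get_order_list(25))  # solver order used at each step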
singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
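As a usage note, the scheduler is typically swapped into an existing pipeline with from_config. A minimal sketch, reusing the stabilityai/stable-diffusion-2-1-base checkpoint that appears earlier in this document (adapt the model id to your own pipeline): Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
# Keep the pipeline's noise-schedule configuration and only swap the solver
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "slice of delicious New York-style berry cheesecake"
# DPMSolver/DPMSolver++ can produce good samples in roughly 10-20 steps
image = pipe(prompt, num_inference_steps=20).images[0]
image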
diff --git a/scrapped_outputs/2cadf44c0003fff6eb9b7e7d77c082f7.txt b/scrapped_outputs/2cadf44c0003fff6eb9b7e7d77c082f7.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/2cadf44c0003fff6eb9b7e7d77c082f7.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... 
gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
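For reference, a minimal sketch that combines the memory helpers documented in this section (assuming the masterful/gligen-1-4-generation-text-box checkpoint from the example above and that accelerate is installed): Copied
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
)

# Offload whole sub-models to CPU and move each one to the GPU only while it runs
pipe.enable_model_cpu_offload()

# Decode the VAE output in slices to lower peak memory and allow larger batch sizes
pipe.enable_vae_slicing()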
prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to process the reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project the image embedding into the phrase embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box. input_phrases_mask (int or List[int]) — +Mask for the phrase inputs; set an entry to 0 to treat the corresponding entry in gligen_phrases as a +placeholder and ignore it (see the style-transfer example below). input_images_mask (int or List[int]) — +Mask for the image inputs; set an entry to 0 to treat the corresponding entry in gligen_images as a +placeholder and ignore it. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation.
If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... 
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. 
If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. 
get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/2cc59956d96b077c9da0afd1d9c5b122.txt b/scrapped_outputs/2cc59956d96b077c9da0afd1d9c5b122.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/2cc59956d96b077c9da0afd1d9c5b122.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. 
In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. 
AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. 
Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. 
generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 language model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). 
langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first. forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. 
act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, defaults to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, defaults to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). 
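Example (a minimal sketch, assuming the cvssp/audioldm2 checkpoint used in the pipeline examples above): Copied
>>> import torch
>>> from diffusers import AudioLDM2Pipeline

>>> pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
>>> unet = pipe.unet  # an AudioLDM2UNet2DConditionModel

>>> # the forward pass accepts two sets of encoder hidden states,
>>> # `encoder_hidden_states` and `encoder_hidden_states_1`
>>> print(type(unet).__name__, unet.config.in_channels, unet.config.cross_attention_dim)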
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/2cc815a3f2ca6c0130795a279ba4e7c1.txt b/scrapped_outputs/2cc815a3f2ca6c0130795a279ba4e7c1.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/2cc815a3f2ca6c0130795a279ba4e7c1.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. 
To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... ) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/2ce0765d67d705254ae415b493ff127a.txt b/scrapped_outputs/2ce0765d67d705254ae415b493ff127a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2ce1dc1aa7daee7b30ebdf3322214c28.txt b/scrapped_outputs/2ce1dc1aa7daee7b30ebdf3322214c28.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/2ce1dc1aa7daee7b30ebdf3322214c28.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. 
A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. 
prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... 
"fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. 
image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
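The noise_level argument described above is the main lever for trading faithfulness to the input image against variation. A minimal sketch (assuming the stabilityai/stable-diffusion-2-1-unclip checkpoint and the example image used earlier on this page; the specific noise_level values are arbitrary illustrative choices): Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)
prompt = "A fantasy landscape, trending on artstation"

# noise_level=0 (the default) stays close to the input image;
# larger values add more noise to the image embedding and increase variation
for noise_level in (0, 250, 500):
    image = pipe(init_image, prompt=prompt, noise_level=noise_level).images[0]
    image.save(f"variation_noise_{noise_level}.png")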
diff --git a/scrapped_outputs/2cf33b039453af0d2be1fa32d9407a3e.txt b/scrapped_outputs/2cf33b039453af0d2be1fa32d9407a3e.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/2cf33b039453af0d2be1fa32d9407a3e.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to faster inference, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of training the model is very sensitive to the guidance_scale values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill it separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example, we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5.
Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all inputs, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the combination that works best. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well.
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/2d1812f3942bc376a9f22a5f32cb5099.txt b/scrapped_outputs/2d1812f3942bc376a9f22a5f32cb5099.txt new file mode 100644 index 0000000000000000000000000000000000000000..adefa4c809ce22324ba26e06ba200fc10adfee55 --- /dev/null +++ b/scrapped_outputs/2d1812f3942bc376a9f22a5f32cb5099.txt @@ -0,0 +1,51 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. 
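To make the "insert a smaller number of new weights" idea concrete, here is a toy sketch of a LoRA-style linear layer in plain PyTorch. It is illustrative only and not the PEFT implementation the training script uses; the class name and hyperparameters are made up for the example: Copied
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative only: wraps a frozen linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # the original weights stay frozen
            p.requires_grad_(False)
        self.lora_down = nn.Linear(base.in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.lora_down.weight, std=1.0 / rank)
        nn.init.zeros_(self.lora_up.weight)     # start as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))

layer = LoRALinear(nn.Linear(768, 768), rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only the low-rank factors train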
LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA relevant parts of the script. UNet text encoder Diffusers uses ~peft.LoraConfig from the PEFT library to set up the parameters of the LoRA adapter such as the rank, alpha, and which modules to insert the LoRA weights into. The adapter is added to the UNet, and only the LoRA layers are filtered for optimization in lora_layers. 
Copied unet_lora_config = LoraConfig( + r=args.rank, + lora_alpha=args.rank, + init_lora_weights="gaussian", + target_modules=["to_k", "to_q", "to_v", "to_out.0"], +) + +unet.add_adapter(unet_lora_config) +lora_layers = filter(lambda p: p.requires_grad, unet.parameters()) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/2d7a903b5fc21382989c64e7aa148a48.txt b/scrapped_outputs/2d7a903b5fc21382989c64e7aa148a48.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/2d7a903b5fc21382989c64e7aa148a48.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. 
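A related knob you may want at inference time is how strongly the trained adapter is applied. The sketch below assumes a recent diffusers release where the LoRA strength is read from the scale entry of cross_attention_kwargs, and it reuses the placeholder weights path from the inference example above: Copied
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# "path/to/lora/model" is a placeholder for your output directory or Hub repo.
pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors")

# scale=1.0 applies the full adapter, 0.0 falls back to the base model.
image = pipeline(
    "A pokemon with blue eyes", cross_attention_kwargs={"scale": 0.7}
).images[0]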
The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... 
).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
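If enable_model_cpu_offload() still leaves the combined pipeline short on memory, the sequential variant described above can be swapped in. A minimal sketch reusing the text-to-image example from this section (expect noticeably slower generation): Copied
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
# Submodules are moved to the GPU one at a time, trading speed for memory.
pipe.enable_sequential_cpu_offload()

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
image = pipe(prompt=prompt, num_inference_steps=25).images[0]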
KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the text prompt, which will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image; if passing latents directly, they will not be encoded +again. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The ControlNet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation.
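A minimal usage sketch for this ControlNet image-to-image call is shown below. The prior and decoder checkpoint names (kandinsky-community/kandinsky-2-2-prior, kandinsky-community/kandinsky-2-2-controlnet-depth) and the zero-filled hint tensor are illustrative assumptions rather than part of these docs; in practice hint would be a preprocessed condition map such as a depth estimate scaled to [0, 1]. Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

# The prior produces the CLIP image embeddings that condition the decoder.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

prompt = "A robot, 4k photo"
negative_prompt = "low quality, bad quality"
image_emb, zero_image_emb = prior(prompt=prompt, negative_prompt=negative_prompt).to_tuple()

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
).resize((768, 768))

# Placeholder condition map; a real hint would be a preprocessed depth map of
# shape (batch, 3, height, width) with values in [0, 1].
hint = torch.zeros(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image=init_image,
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    strength=0.5,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]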
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/2d91b55776d72f79c67a8a7c8c49f0dd.txt b/scrapped_outputs/2d91b55776d72f79c67a8a7c8c49f0dd.txt new file mode 100644 index 0000000000000000000000000000000000000000..bcb666def15e33f1f85b4b3d91e464c6e12c8f33 --- /dev/null +++ b/scrapped_outputs/2d91b55776d72f79c67a8a7c8c49f0dd.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1024) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 64) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers.
If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. 
Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/2d91c0b2d274570bcb5e2925c5b9fe1b.txt b/scrapped_outputs/2d91c0b2d274570bcb5e2925c5b9fe1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..54b679f844e0756b73267dc59e36b49e7f006adb --- /dev/null +++ b/scrapped_outputs/2d91c0b2d274570bcb5e2925c5b9fe1b.txt @@ -0,0 +1,95 @@ +PNDM + + +Overview + +Pseudo Numerical methods for Diffusion Models on manifolds (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao. +The abstract of the paper is the following: +Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. 
We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_pndm.py +Unconditional Image Generation +- + +PNDMPipeline + + +class diffusers.PNDMPipeline + +< +source +> +( +unet: UNet2DModel +scheduler: PNDMScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +The PNDMScheduler to be used in combination with unet to denoise the encoded image. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — The number of images to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +generator (torch.Generator, optional) — A torch +generator to make generation +deterministic. + + +output_type (str, optional, defaults to "pil") — The output format of the generate image. Choose +between PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — Whether or not to return a +ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/2dc80f3592539f82e33c0adbae71d35b.txt b/scrapped_outputs/2dc80f3592539f82e33c0adbae71d35b.txt new file mode 100644 index 0000000000000000000000000000000000000000..dddf90dd1d6da28f9ddab5c27cdbe1acd8a26951 --- /dev/null +++ b/scrapped_outputs/2dc80f3592539f82e33c0adbae71d35b.txt @@ -0,0 +1,99 @@ +DDPM + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. 
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original codebase of this paper can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddpm.py +Unconditional Image Generation +- + +DDPMPipeline + + +class diffusers.DDPMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 1000 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/2ddd3b9cea3f099f12a0fdda33866a50.txt b/scrapped_outputs/2ddd3b9cea3f099f12a0fdda33866a50.txt new file mode 100644 index 0000000000000000000000000000000000000000..62bd3cb1c1f02d7aa575bf2a1b1dcd16ef4a3d38 --- /dev/null +++ b/scrapped_outputs/2ddd3b9cea3f099f12a0fdda33866a50.txt @@ -0,0 +1,365 @@ +Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models + + +Overview + +Attend and Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over the image generation. +The abstract of the paper is the following: +Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. 
To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on a variety of tasks and provide evidence for its versatility and flexibility. +Resources +Project Page +Paper +Original Code +Demo + +Available Pipelines: + +Pipeline +Tasks +Colab +Demo +pipeline_semantic_stable_diffusion_attend_and_excite.py +Text-to-Image Generation +- +https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionAttendAndExcitePipeline + +model_id = "CompVis/stable-diffusion-v1-4" +pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe = pipe.to("cuda") + +prompt = "a cat and a frog" + +# use get_indices function to find out indices of the tokens you want to alter +pipe.get_indices(prompt) + +token_indices = [2, 5] +seed = 6141 +generator = torch.Generator("cuda").manual_seed(seed) + +images = pipe( + prompt=prompt, + token_indices=token_indices, + guidance_scale=7.5, + generator=generator, + num_inference_steps=50, + max_iter_to_alter=25, +).images + +image = images[0] +image.save(f"../images/{prompt}_{seed}.png") + +StableDiffusionAttendAndExcitePipeline + + +class diffusers.StableDiffusionAttendAndExcitePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion and Attend and Excite. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +token_indices: typing.Union[typing.List[int], typing.List[typing.List[int]]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +max_iter_to_alter: int = 25 +thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} +scale_factor: int = 20 +attn_res: typing.Optional[typing.Tuple[int]] = (16, 16) + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +token_indices (List[int]) — +The token indices to alter with attend-and-excite. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
+ + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The first denoising steps are +where the attend-and-excite is applied. I.e. if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps will apply attend-and-excite and the last 5 will not +apply attend-and-excite. + + +thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. + + +scale_factor (int, optional, default to 20) — +Scale factor that controls the step size of each Attend and Excite update. + + +attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. :type attention_store: object + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. 
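As a small illustration of the slicing toggles documented here (disable_vae_slicing above and enable_vae_slicing further below), the following sketch wraps a larger batch in sliced VAE decoding; the batch size is arbitrary and the checkpoint, prompt, and token indices are taken from the usage example above. Copied
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Split VAE decoding into slices to lower peak memory for the larger batch.
pipe.enable_vae_slicing()
images = pipe(
    prompt="a cat and a frog",
    token_indices=[2, 5],
    num_images_per_prompt=4,
    num_inference_steps=50,
).images

# Go back to decoding in a single step once the large batch is done.
pipe.disable_vae_slicing()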
+ +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +get_indices + +< +source +> +( +prompt: str + +) + + + +Utility function to list the indices of the tokens you wish to alte diff --git a/scrapped_outputs/2e168cceca868692325d90c828eb80c4.txt b/scrapped_outputs/2e168cceca868692325d90c828eb80c4.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/2e168cceca868692325d90c828eb80c4.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/2e1a0c4e93963b5a4fcfd76bd28b97fa.txt b/scrapped_outputs/2e1a0c4e93963b5a4fcfd76bd28b97fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..f1c1a9c2dd958669628fba113f9bf7c7441fb5bf --- /dev/null +++ b/scrapped_outputs/2e1a0c4e93963b5a4fcfd76bd28b97fa.txt @@ -0,0 +1,234 @@ +Variance exploding, stochastic sampling from Karras et. al + + +Overview + +Original paper can be found here. + +KarrasVeScheduler + + +class diffusers.KarrasVeScheduler + +< +source +> +( +sigma_min: float = 0.02 +sigma_max: float = 100 +s_noise: float = 1.007 +s_churn: float = 80 +s_min: float = 0.05 +s_max: float = 50 + +) + + +Parameters + +sigma_min (float) — minimum noise magnitude + + +sigma_max (float) — maximum noise magnitude + + +s_noise (float) — the amount of additional noise to counteract loss of detail during sampling. +A reasonable range is [1.000, 1.011]. + + +s_churn (float) — the parameter controlling the overall amount of stochasticity. +A reasonable range is [0, 100]. + + +s_min (float) — the start value of the sigma range where we add noise (enable stochasticity). +A reasonable range is [0, 10]. + + +s_max (float) — the end value of the sigma range where we add noise. +A reasonable range is [0.2, 80]. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. 
“Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details on the parameters, see the original paper’s Appendix E.: “Elucidating the Design Space of +Diffusion-Based Generative Models.” https://arxiv.org/abs/2206.00364. The grid search values used to find the +optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. + +add_noise_to_input + +< +source +> +( +sample: FloatTensor +sigma: float +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Explicit Langevin-like “churn” step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. +TODO Args: + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +return_dict: bool = True + +) +→ +KarrasVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class +KarrasVeOutput — updated sample in the diffusion chain and derivative (TODO double check). + + +Returns + +KarrasVeOutput or tuple + + + +KarrasVeOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). + +step_correct + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +sample_prev: FloatTensor +derivative: FloatTensor +return_dict: bool = True + +) +→ +prev_sample (TODO) + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +sample_prev (torch.FloatTensor) — TODO + + +derivative (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class + + +Returns + +prev_sample (TODO) + + + +updated sample in the diffusion chain. 
derivative (TODO): TODO + + +Correct the predicted sample based on the output model_output of the network. TODO complete description diff --git a/scrapped_outputs/2e24c473132aac952e8f496e78581161.txt b/scrapped_outputs/2e24c473132aac952e8f496e78581161.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/2e24c473132aac952e8f496e78581161.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
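To see how set_timesteps(), add_noise_to_input(), and the step() and step_correct() methods documented just below fit together, here is a minimal sampling-loop sketch modeled on the deprecated KarrasVePipeline. The checkpoint name, the scheduler's schedule attribute of per-timestep sigmas, and the (sample + 1) / 2 input scaling are assumptions carried over from that pipeline rather than guarantees of this API: Copied
import torch
from diffusers import KarrasVeScheduler, UNet2DModel

# Assumed unconditional checkpoint, used only to make the sketch concrete.
model = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256")
scheduler = KarrasVeScheduler()
scheduler.set_timesteps(num_inference_steps=50)

generator = torch.manual_seed(0)
sample = torch.randn(
    (1, model.config.in_channels, model.config.sample_size, model.config.sample_size),
    generator=generator,
) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    # `schedule` is assumed to hold the per-timestep sigma values built by `set_timesteps`.
    sigma = scheduler.schedule[t]
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # 1. Stochastic "churn": add noise to reach the higher noise level sigma_hat.
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma, generator=generator)

    # 2. Euler (predictor) step from sigma_hat down to sigma_prev.
    model_output = (sigma_hat / 2) * model((sample_hat + 1) / 2, sigma_hat / 2).sample
    output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

    # 3. Second-order correction, skipped on the final step where sigma_prev == 0.
    if sigma_prev != 0:
        model_output = (sigma_prev / 2) * model((output.prev_sample + 1) / 2, sigma_prev / 2).sample
        output = scheduler.step_correct(
            model_output, sigma_hat, sigma_prev, sample_hat, output.prev_sample, output.derivative
        )

    sample = output.prev_sample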
step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/2e38275f432cd7bcef7df5d3556817fa.txt b/scrapped_outputs/2e38275f432cd7bcef7df5d3556817fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/2e38275f432cd7bcef7df5d3556817fa.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. 
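Before the text-to-image example below, here is a minimal sketch of that from_pipe() hand-off. It reuses the same checkpoint as the example that follows; the identity check at the end is only an illustration of the component sharing, not part of the documented API: Copied
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Switch to the image-to-image task by handing over the components that are
# already in memory instead of loading the checkpoint a second time.
pipeline_i2i = AutoPipelineForImage2Image.from_pipe(pipeline)

# The modules are shared rather than copied, so no extra GPU memory is used.
assert pipeline_i2i.unet is pipeline.unet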
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. 
It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. 
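As a hedged illustration of the controlnet branch described above (the checkpoint names are assumptions chosen for illustration, not requirements): Copied
import torch
from diffusers import AutoPipelineForText2Image, ControlNetModel

# Assumed checkpoints, for illustration only.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Because a `controlnet` argument was passed, the resolved class should be the
# ControlNet variant of the text-to-image pipeline.
print(pipeline.__class__.__name__)  # StableDiffusionControlNetPipeline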
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. 
See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). 
+ torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. 
This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... 
) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/2e4d7fa3c26a15b3acf6b7eee42c88de.txt b/scrapped_outputs/2e4d7fa3c26a15b3acf6b7eee42c88de.txt new file mode 100644 index 0000000000000000000000000000000000000000..9f114fdb9e4df008a7dccedd3c1f0129e4d4d434 --- /dev/null +++ b/scrapped_outputs/2e4d7fa3c26a15b3acf6b7eee42c88de.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
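Before diving into the API reference, here is a minimal end-to-end sketch of conditioning Stable Diffusion with a canny-edge ControlNet; the checkpoint names and the conditioning-image URL are assumptions chosen for illustration: Copied
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumed checkpoints: a canny-edge ControlNet paired with Stable Diffusion v1-5.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Placeholder URL: replace with a real, precomputed canny edge map.
canny_image = load_image("https://example.com/canny_edges.png")

image = pipe(
    "a futuristic city at sunset, highly detailed",
    image=canny_image,
    num_inference_steps=30,
).images[0]
image.save("controlnet_canny.png")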
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. 
Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. 
attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). 
Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
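As a short illustration of the from_unet() method documented above (the base checkpoint is an assumption; any UNet2DConditionModel works the same way): Copied
from diffusers import ControlNetModel, UNet2DConditionModel

# Assumed base checkpoint; any UNet2DConditionModel works the same way.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Initialize a trainable ControlNet whose encoder blocks are copied from the UNet,
# with the conditioning embedding and zero convolutions freshly initialized.
controlnet = ControlNetModel.from_unet(unet)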
diff --git a/scrapped_outputs/2e7bd95f9d5f9b3a2525ac07cc4ebdb6.txt b/scrapped_outputs/2e7bd95f9d5f9b3a2525ac07cc4ebdb6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/2e7bd95f9d5f9b3a2525ac07cc4ebdb6.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. 
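As a minimal sketch of the "iterate instead of batching" advice from earlier in this section (the prompts are placeholders): Copied
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse at dawn",
]

# Call the pipeline once per prompt instead of passing the whole list in a
# single batched call.
images = [pipe(prompt).images[0] for prompt in prompts]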
diff --git a/scrapped_outputs/2e807a3f98359ccb5a8022848e444860.txt b/scrapped_outputs/2e807a3f98359ccb5a8022848e444860.txt new file mode 100644 index 0000000000000000000000000000000000000000..9325325931d7654b3c3a275f038bf531d9fb9421 --- /dev/null +++ b/scrapped_outputs/2e807a3f98359ccb5a8022848e444860.txt @@ -0,0 +1,75 @@ +What is safetensors? + +safetensors is a different format from the classic .bin, which is the PyTorch format based on pickle. It contains the exact same data, which is just the model weights (or tensors). +Pickle is notoriously unsafe, since it allows any malicious file to execute arbitrary code. The Hub itself tries to prevent issues from it, but it's not a silver bullet. +The first and foremost goal of safetensors is to make loading machine learning models safe, in the sense that no takeover of your computer can happen. Hence the name. + +Why use safetensors? + +Safety is one reason, for instance if you're attempting to use a model that is not well known and you're not sure about the source of the file. +A secondary reason is loading speed. safetensors can load models much faster than regular pickle files. If you spend a lot of time switching models, this can be a huge time saver. +Numbers taken on an AMD EPYC 7742 64-Core Processor: + + + Copied +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1") + +# Loaded in safetensors 0:00:02.033658 +# Loaded in Pytorch 0:00:02.663379 +This is the entire loading time; the time to load the 500MB of weights themselves is: + + + Copied +Safetensors: 3.4873ms +PyTorch: 172.7537ms +Performance in general is a tricky business, and there are a few things to understand: +If you're using the model for the first time from the Hub, you will have to download the weights. That is very likely to be much slower than any loading method, so you will not see any difference. +If you're loading the model for the first time (let's say after a reboot), your machine will have to actually read from disk. That is likely to be equally slow in both cases. Again, the speed difference may not be as visible (this depends on the hardware and the actual model). +The best performance benefit comes when the model was already loaded previously on your computer and you're switching from one model to another. Your OS tries really hard not to read from disk, since that is slow, so it keeps the files around in RAM, which makes reloading much faster. Since safetensors does zero-copy of the tensors, reloading will be faster than PyTorch, which has at least one extra copy to do. + +How to use safetensors? + +If you have safetensors installed and all the weights are available in safetensors format, then they will be used by default instead of the PyTorch weights. +If you are really paranoid about this, the ultimate weapon is disabling torch.load: + + + Copied +import torch + + +def _raise(): + raise RuntimeError("I don't want to use pickle") + + +torch.load = lambda *args, **kwargs: _raise() + +I want to use model X but it doesn't have safetensors weights. + +Just go to this space. It will create a new PR with the weights, let's say refs/pr/22. The space downloads the pickled version, converts it, and uploads it to the Hub as a PR. If anything bad is contained in the file, it's the Hugging Face Hub that will get the issues, not your own computer. And we're equipped to deal with it.
+Then, in order to use the model even before the branch gets accepted by the original author, you can do: + + + Copied +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", revision="refs/pr/22") +or you can test it directly online with this space. +And that's it! +Anything unclear, any concerns, or found a bug? Open an issue diff --git a/scrapped_outputs/2ea41af74032ac0887d032c244099a1d.txt b/scrapped_outputs/2ea41af74032ac0887d032c244099a1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/2ea41af74032ac0887d032c244099a1d.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 was created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from its GitHub page is: Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses the CLIP model and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate the image embedding. Pipeline for generating the image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). 
negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. 
negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... 
).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... ).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. 
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, +text_encoder, vae, and safety checker have their state dicts saved to CPU and are then moved to a +torch.device('meta') and loaded on the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class XLMRobertaTokenizer. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or a numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). 
image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... 
negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. 
Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/2ea877d0533df56f440b1cc6771d1979.txt b/scrapped_outputs/2ea877d0533df56f440b1cc6771d1979.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/2ea877d0533df56f440b1cc6771d1979.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... 
) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/2eb75baec29d84fa687f9e7ebc755f1d.txt b/scrapped_outputs/2eb75baec29d84fa687f9e7ebc755f1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/2eb75baec29d84fa687f9e7ebc755f1d.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. 
This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can generate the same image and improve on it, create a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. Let’s start by loading the model in float16 and generating an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality.
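If you want to verify these timings on your own hardware before changing anything else, a small helper can standardize the measurement. This is a minimal sketch, not part of Diffusers: the time_pipeline function is hypothetical, it assumes the pipeline and prompt defined above plus a CUDA device, and it runs one short warm-up call so one-time setup costs don’t skew the result.

import time
import torch

def time_pipeline(pipeline, prompt, num_inference_steps=50):
    # Hypothetical helper: warm up once, then time a single full generation.
    generator = torch.Generator("cuda").manual_seed(0)
    pipeline(prompt, num_inference_steps=5, generator=generator)  # warm-up
    torch.cuda.synchronize()
    start = time.perf_counter()
    pipeline(prompt, num_inference_steps=num_inference_steps, generator=generator)
    torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"latency: {time_pipeline(pipeline, prompt):.1f}s")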
You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! 
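If you’d rather not guess at the largest workable batch size by hand, you can probe for it programmatically. The sketch below is hypothetical and assumes the get_inputs helper from above as well as a PyTorch version that exposes torch.cuda.OutOfMemoryError (older releases raise a generic RuntimeError instead):

import torch

def find_max_batch_size(pipeline, start=4, limit=32):
    # Hypothetical helper: double the batch size until generation runs out of GPU memory.
    batch_size, best = start, None
    while batch_size <= limit:
        try:
            pipeline(**get_inputs(batch_size=batch_size)).images
            best = batch_size
            batch_size *= 2
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()
            break
    return best

print(find_max_batch_size(pipeline))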
This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline and generating some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep in mind during prompt engineering are: How is the image I want to generate, or similar images of it, stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive!
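Negative prompts, mentioned above as something worth researching, are another inexpensive quality lever. A minimal sketch, reusing get_inputs and make_image_grid from before; the negative prompt text here is illustrative, not a recommendation from this guide:

negative_prompt = "lowres, bad anatomy, deformed, blurry, worst quality"
# A single negative prompt string is broadcast across the whole batch of prompts.
images = pipeline(**get_inputs(batch_size=8), negative_prompt=negative_prompt).images
make_image_grid(images, rows=2, cols=4)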
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/2eb81a9f7988254d15f7ab84ae0b9aba.txt b/scrapped_outputs/2eb81a9f7988254d15f7ab84ae0b9aba.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/2eb81a9f7988254d15f7ab84ae0b9aba.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. 
In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. 
solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/2ec3756eaf583e44b5a44106eb125042.txt b/scrapped_outputs/2ec3756eaf583e44b5a44106eb125042.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/2ec3756eaf583e44b5a44106eb125042.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. 
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. 
Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/2ed7184945a947fbc0b1dd29670f0ea1.txt b/scrapped_outputs/2ed7184945a947fbc0b1dd29670f0ea1.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc1dc94699af2cf7d2df56ad88052beba3fe6541 --- /dev/null +++ b/scrapped_outputs/2ed7184945a947fbc0b1dd29670f0ea1.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). 
adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
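For reference, enabling FreeU on this pipeline is a single call once the pipeline is loaded. The snippet below is a minimal sketch: the scaling factors shown are commonly cited starting values for Stable Diffusion v1-style UNets rather than values prescribed by this documentation, so tune them against the official FreeU repository.

import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Assumed starting values for a Stable Diffusion v1-style UNet; adjust as needed.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
# ... run the pipeline as usual ...
pipe.disable_freeu()  # restore the default denoising behavior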
StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. 
If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is to push the generated image towards the inital image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +StableDiffusionXLPipelineOutput or tuple + +StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. 
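As a quick illustration of the two VAE-related switches documented above, the sketch below enables them on the SDXL InstructPix2Pix checkpoint used in the earlier example; treat it as a minimal sketch rather than a prescribed workflow.

import torch
from diffusers import StableDiffusionXLInstructPix2PixPipeline

pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
    "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
).to("cuda")

# Trade a little decoding speed for a much smaller peak-memory footprint.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
# ... generate as in the example above ...
pipe.disable_vae_slicing()  # back to one-shot decoding
pipe.disable_vae_tiling()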
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/2f1a2c11d7ab8df1f7c3e723c7d2f778.txt b/scrapped_outputs/2f1a2c11d7ab8df1f7c3e723c7d2f778.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/2f1a2c11d7ab8df1f7c3e723c7d2f778.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. 
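To make the recommendation above concrete, here is a minimal sketch of switching on memory-efficient attention. It assumes the xformers package is installed; on PyTorch 2.0+ the explicit call is usually unnecessary because scaled_dot_product_attention is used by default.

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Mainly useful on PyTorch 1.13.x; PyTorch 2.0+ already dispatches to an efficient attention kernel.
pipe.enable_xformers_memory_efficient_attention()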
The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory efficient attention 2.63s x3.61 Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. diff --git a/scrapped_outputs/2f271b8ba5f4c066eb31abf7b4143189.txt b/scrapped_outputs/2f271b8ba5f4c066eb31abf7b4143189.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c16fa32759c83ff581d2c898be0970cba3dbad --- /dev/null +++ b/scrapped_outputs/2f271b8ba5f4c066eb31abf7b4143189.txt @@ -0,0 +1,359 @@ +Text-Guided Image Inpainting + + +StableDiffusionInpaintPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. +The original codebase can be found here: +Stable Diffusion V1: CompVis/stable-diffusion +Stable Diffusion V2: Stability-AI/stablediffusion +Available checkpoints are: +stable-diffusion-inpainting (512x512 resolution): runwayml/stable-diffusion-inpainting +stable-diffusion-2-inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting + +class diffusers.StableDiffusionInpaintPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, used to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. 
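As a rough sketch of how the offloading helpers above might be used (model-level offloading is usually the better starting point; sequential offloading saves more memory at a larger speed cost): Copied
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
)

# Move whole sub-models to the GPU only while they are needed;
# do not call pipe.to("cuda") yourself when using offloading.
pipe.enable_model_cpu_offload()

# Alternatively, for the lowest memory footprint (and slower inference):
# pipe.enable_sequential_cpu_offload()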
diff --git a/scrapped_outputs/2f3c4c9f94b4f2f09f1c73428167fe32.txt b/scrapped_outputs/2f3c4c9f94b4f2f09f1c73428167fe32.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/2f3c4c9f94b4f2f09f1c73428167fe32.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. 
Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 
🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/2f604e059a5d9c2420dc3ac5f0d63596.txt b/scrapped_outputs/2f604e059a5d9c2420dc3ac5f0d63596.txt new file mode 100644 index 0000000000000000000000000000000000000000..dbefbca11fa04d4d6535c53f4288ba2bd56c73ef --- /dev/null +++ b/scrapped_outputs/2f604e059a5d9c2420dc3ac5f0d63596.txt @@ -0,0 +1,128 @@ +How to run Stable Diffusion with Core ML + +Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. +Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. +You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. + +Stable Diffusion Core ML Checkpoints + +Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. +Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. +Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: +the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base +coreml organization includes custom DreamBoothed and finetuned models +use this filter to return all available Core ML checkpoints +If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. 
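If you prefer to search programmatically instead of browsing with the filter, here is a minimal sketch using a recent version of huggingface_hub (the author and search terms are just examples): Copied
from huggingface_hub import HfApi

api = HfApi()
# List Core ML Stable Diffusion checkpoints published by the "apple" organization
for model in api.list_models(author="apple", search="coreml-stable-diffusion"):
    print(model.id)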
+ +Selecting the Core ML Variant to Use + +Stable Diffusion models can be converted to different Core ML variants intended for different purposes: +The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: +split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. +The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. +The supported inference framework. +packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. +compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. +The official Core ML Stable Diffusion models include these variants, but the community ones may vary: + + + Copied +coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages +You can download and use the variant you need as shown below. + +Core ML Inference in Python + +Install the following libraries to run Core ML inference in Python: + + + Copied +pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion + +Download the Model Checkpoints + +To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. +This is how you’d download the original attention variant from the Hub to a directory called models: + + + Copied +from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") + +Inference + +Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. + + + Copied +python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 + should point to the checkpoint you downloaded in the step above, and --compute-unit indicates the hardware you want to allow for inference. 
It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. +The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. +For example, if you want to use runwayml/stable-diffusion-v1-5: + + + Copied +python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 + +Core ML inference in Swift + +Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. + +Download + +To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: + + + Copied +from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") + +Inference + +To run inference, please clone Apple’s repo: + + + Copied +git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion +And then use Apple’s command line tool, Swift Package Manager: + + + Copied +swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" +You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. +For more details, please refer to the instructions in Apple’s repo. + +Supported Diffusers Features + +The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: +Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. +Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. +Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. +Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. 
+If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR :) + +Native Diffusers Swift app + +One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build :) diff --git a/scrapped_outputs/2f9cdac96aed1a160977c1b85e1a5e69.txt b/scrapped_outputs/2f9cdac96aed1a160977c1b85e1a5e69.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/2f9cdac96aed1a160977c1b85e1a5e69.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. 
Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/2fa53b814cd93ccf6a7eb6740705a1e6.txt b/scrapped_outputs/2fa53b814cd93ccf6a7eb6740705a1e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/2fa53b814cd93ccf6a7eb6740705a1e6.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. 
Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/2fc3b9e818ae13e6e1ed26d8c3c2008c.txt b/scrapped_outputs/2fc3b9e818ae13e6e1ed26d8c3c2008c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/2fca7ca24c81219fa78c08de5ddce113.txt b/scrapped_outputs/2fca7ca24c81219fa78c08de5ddce113.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/2fca7ca24c81219fa78c08de5ddce113.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. 
To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/2fe655b6c5e6a2cea119953fb8ef79ca.txt b/scrapped_outputs/2fe655b6c5e6a2cea119953fb8ef79ca.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/300959eb7a837f8d558d7375e63109eb.txt b/scrapped_outputs/300959eb7a837f8d558d7375e63109eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/300959eb7a837f8d558d7375e63109eb.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. 
For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. 
Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/304ace3ad663acd12d3c441ee0672fe3.txt b/scrapped_outputs/304ace3ad663acd12d3c441ee0672fe3.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/304ace3ad663acd12d3c441ee0672fe3.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. 
TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. +num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. diff --git a/scrapped_outputs/308da963275db3742f634d67596a647a.txt b/scrapped_outputs/308da963275db3742f634d67596a647a.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef62c086e705e0fd98841711ee18a967fbc85f5e --- /dev/null +++ b/scrapped_outputs/308da963275db3742f634d67596a647a.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
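Ahead of the class reference, a hedged sketch of how a UNetMotionModel is commonly assembled: a pretrained 2D UNet is combined with a motion adapter, and the spatial weights are then frozen so that only the motion modules are trained (see freeze_unet2d_params() below). The from_unet2d() helper, the MotionAdapter class, and both checkpoint names are assumptions about the surrounding diffusers/AnimateDiff setup rather than something defined on this page.

from diffusers import MotionAdapter, UNet2DConditionModel, UNetMotionModel

# Assumed checkpoints for illustration: an SD-1.5-style UNet plus a matching motion adapter.
unet2d = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")

# Fold the adapter's temporal layers into the 2D UNet (assumed from_unet2d helper).
unet = UNetMotionModel.from_unet2d(unet2d, adapter)

# Keep the spatial UNet frozen and fine-tune only the motion modules.
unet.freeze_unet2d_params()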
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/308e04b86cf189787748de3fdab8d41d.txt b/scrapped_outputs/308e04b86cf189787748de3fdab8d41d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/309ad156057d565398bc2bbbc5d4df28.txt b/scrapped_outputs/309ad156057d565398bc2bbbc5d4df28.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/309ad156057d565398bc2bbbc5d4df28.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. 
There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. 
Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/30ab0a98baf78e47d18fb80ab91bc2fe.txt b/scrapped_outputs/30ab0a98baf78e47d18fb80ab91bc2fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/30ab0a98baf78e47d18fb80ab91bc2fe.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. 
The primary function of models is to denoise an input sample as modeled by the distribution p_{\theta}(x_{t-1}|x_{t}). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. 
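In addition to the basic example that follows, here is a hedged sketch combining several of the loading options described above (torch_dtype, variant, and use_safetensors); the checkpoint name and the existence of an "fp16" weight variant are assumptions for illustration only.

import torch
from diffusers import UNet2DConditionModel

# Load half-precision "fp16" variant weights as safetensors, then move the model to GPU.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
unet.to("cuda")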
Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). 
from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files.
The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/30ae2809e342b101d5da663267bd6266.txt b/scrapped_outputs/30ae2809e342b101d5da663267bd6266.txt new file mode 100644 index 0000000000000000000000000000000000000000..5609c43fc2c76167b35287c9c0d231795b1d9be0 --- /dev/null +++ b/scrapped_outputs/30ae2809e342b101d5da663267bd6266.txt @@ -0,0 +1,332 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. 
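As a quick orientation before the API reference, a hedged text-to-image sketch that also seeds a torch.Generator for reproducible outputs; the prompt, seed, and guidance values are illustrative only.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A fixed generator makes sampling deterministic for a given seed.
generator = torch.Generator(device="cuda").manual_seed(42)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    negative_prompt="blurry, low quality",
    guidance_scale=7.5,
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("astronaut.png")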
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
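A short hedged sketch of the two slicing toggles documented above used together to trade a little speed for lower peak memory; as noted above, skip attention slicing if you already rely on PyTorch 2.0 SDPA or xFormers.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_attention_slicing()  # compute attention in slices ("auto" halves the input to the heads)
pipe.enable_vae_slicing()        # decode latents in slices so larger batches fit in memory

images = pipe(["a watercolor fox"] * 4, num_inference_steps=30).images

# Switch both off again once memory pressure is gone.
pipe.disable_attention_slicing()
pipe.disable_vae_slicing()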
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A <cat-toy> backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used.
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure the UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipeline’s __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Load pipeline from a local file +>>> # (assumes the checkpoint was previously saved as ./v1-5-pruned-emaonly.ckpt) +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +...
) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/30b0fbe69ff61648d547035fbd858e02.txt b/scrapped_outputs/30b0fbe69ff61648d547035fbd858e02.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/30b0fbe69ff61648d547035fbd858e02.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! 
Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. 
This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/30fe9ee720e1c7673bd9a99f0404f805.txt b/scrapped_outputs/30fe9ee720e1c7673bd9a99f0404f805.txt new file mode 100644 index 0000000000000000000000000000000000000000..fe52e2711d246e696f90c20d07eb69b0a9ed5607 --- /dev/null +++ b/scrapped_outputs/30fe9ee720e1c7673bd9a99f0404f805.txt @@ -0,0 +1,199 @@ +Denoising diffusion implicit models (DDIM) + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. 
In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. + +DDIMScheduler + + +class diffusers.DDIMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +clip_sample: bool = True +set_alpha_to_one: bool = True +steps_offset: int = 0 +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + +set_alpha_to_one (bool, default True) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising +diffusion probabilistic models (DDPMs) with non-Markovian guidance. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. 
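Because the scheduler's configuration is stored via ConfigMixin, an existing pipeline's scheduler can be rebuilt as a DDIMScheduler directly from that stored config, overriding individual options where needed. A minimal sketch (the checkpoint id is illustrative): Copied from diffusers import DDIMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Rebuild a DDIM scheduler from the pipeline's current scheduler config,
# overriding options such as the offset/alpha combination used for Stable Diffusion.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, steps_offset=1, set_alpha_to_one=False
)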
+For more details, see the original paper: https://arxiv.org/abs/2010.02502 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +eta: float = 0.0 +use_clipped_model_output: bool = False +generator = None +variance_noise: typing.Optional[torch.FloatTensor] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +eta (float) — weight of noise for added noise in diffusion step. + + +use_clipped_model_output (bool) — if True, compute “corrected” model_output from the clipped +predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when +self.config.clip_sample is True. If no clipping has happened, “corrected” model_output would +coincide with the one provided as input and use_clipped_model_output will have not effect. +generator — random number generator. + + +variance_noise (torch.FloatTensor) — instead of generating noise for the variance using generator, we +can directly provide the noise for the variance itself. This is useful for methods such as +CycleDiffusion. (https://arxiv.org/abs/2210.05559) + + +return_dict (bool) — option for returning tuple rather than DDIMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/313c023cf8c1942623140f71e2877f89.txt b/scrapped_outputs/313c023cf8c1942623140f71e2877f89.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/313c023cf8c1942623140f71e2877f89.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. 
This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. 
Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/317dc2e012bf87c99e37709aea670db6.txt b/scrapped_outputs/317dc2e012bf87c99e37709aea670db6.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/317dc2e012bf87c99e37709aea670db6.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. 
timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/317df460f32710bfe05201439474dde5.txt b/scrapped_outputs/317df460f32710bfe05201439474dde5.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/317df460f32710bfe05201439474dde5.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. 
Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
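As with the other memory helpers, tiled VAE decoding can be switched on around a single large generation and switched off afterwards. A minimal sketch reusing the checkpoint from the example above (the prompt and resolution are illustrative, and output quality at non-default resolutions may vary): Copied import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

# Tile the VAE so decoding a larger-than-usual image stays within GPU memory.
pipe.enable_vae_tiling()
image = pipe(
    "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    num_inference_steps=4,
    guidance_scale=8.0,
    height=768,
    width=1024,
).images[0]

# Return to single-pass decoding for regular resolutions.
pipe.disable_vae_tiling()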
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 
requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. 
The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
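Because encode_prompt is exposed on the pipeline, prompt embeddings can be computed once and reused across calls instead of re-encoding the prompt every time. The sketch below assumes encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple, matching the Stable Diffusion pipelines it mirrors, and reuses the checkpoint and input image from the image-to-image example above.

import PIL
import torch
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

# Encode the prompt once. The LCM image-to-image call does not take a negative prompt,
# so classifier-free guidance is disabled here and the second return value is unused.
prompt_embeds, _ = pipe.encode_prompt(
    "High altitude snowy mountains",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,
)

image = PIL.Image.open("./snowy_mountains.png")

# Reuse the cached embeddings instead of passing the raw prompt string.
images = pipe(image=image, prompt_embeds=prompt_embeds, num_inference_steps=4, guidance_scale=8.0).images
images[0].save("image.png")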
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/31b13512936495eaf54117ceb6acdd68.txt b/scrapped_outputs/31b13512936495eaf54117ceb6acdd68.txt new file mode 100644 index 0000000000000000000000000000000000000000..f76d06d350dce4e259612dd262fa10081b7f954d --- /dev/null +++ b/scrapped_outputs/31b13512936495eaf54117ceb6acdd68.txt @@ -0,0 +1,210 @@ +Schedulers + +Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this are the Schedulers. +Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: +How many denoising steps? +Stochastic or deterministic? +What algorithm to use to find the denoised sample +They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. +The following paragraphs shows how to do so with the 🧨 Diffusers library. + +Load pipeline + +Let’s start by loading the stable diffusion pipeline. +Remember that you have to be a registered user on the 🤗 Hugging Face Hub, and have “click-accepted” the license in order to use stable diffusion. + + + Copied +from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +# first we need to login with our access token +login() + +# Now we can download the pipeline +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +Next, we move it to GPU: + + + Copied +pipeline.to("cuda") + +Access the scheduler + +The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. 
+ + + Copied +pipeline.scheduler +Output: + + + Copied +PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.8.0.dev0", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "trained_betas": null +} +We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: + + + Copied +prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." +Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + + +Changing the scheduler + +Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property SchedulerMixin.compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. + + + Copied +pipeline.scheduler.compatibles +Output: + + + Copied +[diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] +Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: +LMSDiscreteScheduler, +DDIMScheduler, +DPMSolverMultistepScheduler, +EulerDiscreteScheduler, +PNDMScheduler, +DDPMScheduler, +EulerAncestralDiscreteScheduler. +We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient ConfigMixin.config property in combination with the ConfigMixin.from_config() function. + + + Copied +pipeline.scheduler.config +returns a dictionary of the configuration of the scheduler: +Output: + + + Copied +FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('steps_offset', 1), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.8.0.dev0'), + ('clip_sample', False)]) +This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. + + + Copied +from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +Cool, now we can run the pipeline again to compare the generation quality. + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + + +Compare schedulers + +So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. 
+A number of better schedulers have been released that can be run with much fewer steps, let’s compare them here: +LMSDiscreteScheduler usually leads to better results: + + + Copied +from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + +EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. + + + Copied +from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +and: + + + Copied +from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +At the time of writing this doc DPMSolverMultistepScheduler gives arguably the best speed/quality trade-off and can be run with as little +as 20 steps. + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image + + + +As you can see most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. diff --git a/scrapped_outputs/3215ea6d65dadb631d6ff6b42078edd7.txt b/scrapped_outputs/3215ea6d65dadb631d6ff6b42078edd7.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/3215ea6d65dadb631d6ff6b42078edd7.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. 
Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/323c37831387f0b0d167d2dd86bcd3f8.txt b/scrapped_outputs/323c37831387f0b0d167d2dd86bcd3f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..1682c999f107e1c6ee2accbd0fb9ce7568ca96d8 --- /dev/null +++ b/scrapped_outputs/323c37831387f0b0d167d2dd86bcd3f8.txt @@ -0,0 +1,200 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/dreambooth +pip install -r requirements.txt + + + + Copied cd examples/dreambooth +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. 
If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. 
Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). 
Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. + + +On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ + + +On a 12GB GPU, you’ll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to None instead of zero to reduce your memory-usage. Copied accelerate launch train_dreambooth.py \ + --use_8bit_adam \ + --gradient_checkpointing \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + + +On a 8GB GPU, you’ll need DeepSpeed to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: Copied accelerate config During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the DeepSpeed documentation for more configuration options. You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam deepspeed.ops.adam.DeepSpeedCPUAdam for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That’s it! You don’t need to add any additional parameters to your training command. 
+ + + + + + Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub + + + + Copied export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path-to-save-model" + +python train_dreambooth_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --learning_rate=5e-6 \ + --max_train_steps=400 \ + --push_to_hub + + +Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") + + + + Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") + + + + Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-your-trained-model", dtype=jax.numpy.bfloat16) + +prompt = "A photo of sks dog in a bucket" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +image.save("dog-bucket.png") + + + LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. 
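As a quick orientation before the LoRA guide, a trained DreamBooth LoRA is typically loaded on top of the base model at inference time. The sketch below assumes the LoRA weights were saved by train_dreambooth_lora.py to a local output directory; adjust the path to wherever your run wrote its weights.

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Load the LoRA weights produced by the DreamBooth LoRA training run (example path).
pipeline.load_lora_weights("path/to/lora/output_dir")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")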
Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/3255eb3c121e785745f6cde95ce49dc2.txt b/scrapped_outputs/3255eb3c121e785745f6cde95ce49dc2.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/3255eb3c121e785745f6cde95ce49dc2.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions. Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +gneration. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. 
temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). 
If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. 
See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/3261dd8b886ac754c541b54c908f1cd6.txt b/scrapped_outputs/3261dd8b886ac754c541b54c908f1cd6.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/3261dd8b886ac754c541b54c908f1cd6.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. 
adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. 
adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. 
The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. 
text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/32725b940950b73823a3b30974fcedb9.txt b/scrapped_outputs/32725b940950b73823a3b30974fcedb9.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc1dc94699af2cf7d2df56ad88052beba3fe6541 --- /dev/null +++ b/scrapped_outputs/32725b940950b73823a3b30974fcedb9.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. 
Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. 
Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. 
text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. 
force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale pushes the generated image towards the initial image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages the model to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called.
If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +StableDiffusionXLPipelineOutput or tuple + +StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. 
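Below is a minimal, unofficial sketch (reusing the checkpoint, image URL, and edit instruction from the example above) of how sliced VAE decoding can be toggled around a generation call to lower peak memory at a small speed cost:
>>> import torch
>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline
>>> from diffusers.utils import load_image

>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained(
...     "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16
... ).to("cuda")

>>> # Decode the latents slice by slice to reduce peak VRAM usage.
>>> pipe.enable_vae_slicing()

>>> image = load_image(
...     "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
... ).resize((768, 768))
>>> edited_image = pipe(
...     prompt="Turn sky into a cloudy one",
...     image=image,
...     num_inference_steps=30,
... ).images[0]

>>> # Revert to decoding in a single step once memory is no longer a concern.
>>> pipe.disable_vae_slicing()
The same pattern applies to enable_vae_tiling() and disable_vae_tiling(), described below, when working at very large resolutions.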
disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/32769d90fb1a4ce36a45c6410e0ac2bf.txt b/scrapped_outputs/32769d90fb1a4ce36a45c6410e0ac2bf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/32a3b812a85e357affa0d7f7d520ccce.txt b/scrapped_outputs/32a3b812a85e357affa0d7f7d520ccce.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/32a3b812a85e357affa0d7f7d520ccce.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. 
Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/32a4ac5cf1103b831e82131960916edd.txt b/scrapped_outputs/32a4ac5cf1103b831e82131960916edd.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/32a4ac5cf1103b831e82131960916edd.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from its GitHub page is: Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate the image embedding. Pipeline for generating the image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference.
generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. 
Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. 
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. 
tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. 
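To make the memory trade-off described for enable_sequential_cpu_offload() above concrete, here is a minimal sketch contrasting it with enable_model_cpu_offload(); both are standard DiffusionPipeline helpers, the checkpoint simply mirrors the combined-pipeline example above, and a CUDA device is assumed:

import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)

# Option 1: model-level offloading. Whole sub-models are moved to the GPU on demand,
# giving moderate memory savings with only a small speed penalty.
pipe.enable_model_cpu_offload()

# Option 2: submodule-level offloading. Larger memory savings, but noticeably slower.
# Use one option or the other, not both.
# pipe.enable_sequential_cpu_offload()

image = pipe("red cat, 4k photo", num_inference_steps=25).images[0]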
negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... 
num_inference_steps=100, +... strength=0.2, +... ).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. 
The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image +import os + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage.
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference.
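To make the mask_image formats described above concrete, here is a short sketch that builds the same rectangular mask in the three accepted representations; the shapes follow the documentation above, the variant you pass should match the type of image, and this is purely an illustrative sketch rather than part of the pipeline API:

import numpy as np
import torch
from PIL import Image, ImageDraw

height, width = 768, 768

# NumPy mask for a PIL/NumPy image: shape (H, W), 1.0 where pixels should be repainted.
np_mask = np.zeros((height, width), dtype=np.float32)
np_mask[:250, 250:-250] = 1.0

# PIL mask: white regions are repainted, black regions are preserved.
pil_mask = Image.new("L", (width, height), 0)
ImageDraw.Draw(pil_mask).rectangle([250, 0, width - 250, 250], fill=255)

# Torch mask for a torch image: single channel, e.g. (B, 1, H, W).
torch_mask = torch.from_numpy(np_mask)[None, None]  # shape (1, 1, 768, 768)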
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. 
unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/32c0125b05b8805d21d8dd8867714cba.txt b/scrapped_outputs/32c0125b05b8805d21d8dd8867714cba.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/32dddb2bd19d38ba54e2d56787b9537d.txt b/scrapped_outputs/32dddb2bd19d38ba54e2d56787b9537d.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/32dddb2bd19d38ba54e2d56787b9537d.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai.
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To dislay in google colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/32deb00cb2c301f5f5268107269b79d4.txt b/scrapped_outputs/32deb00cb2c301f5f5268107269b79d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..f1c1a9c2dd958669628fba113f9bf7c7441fb5bf --- /dev/null +++ b/scrapped_outputs/32deb00cb2c301f5f5268107269b79d4.txt @@ -0,0 +1,234 @@ +Variance exploding, stochastic sampling from Karras et. al + + +Overview + +Original paper can be found here. + +KarrasVeScheduler + + +class diffusers.KarrasVeScheduler + +< +source +> +( +sigma_min: float = 0.02 +sigma_max: float = 100 +s_noise: float = 1.007 +s_churn: float = 80 +s_min: float = 0.05 +s_max: float = 50 + +) + + +Parameters + +sigma_min (float) — minimum noise magnitude + + +sigma_max (float) — maximum noise magnitude + + +s_noise (float) — the amount of additional noise to counteract loss of detail during sampling. 
+A reasonable range is [1.000, 1.011]. + + +s_churn (float) — the parameter controlling the overall amount of stochasticity. +A reasonable range is [0, 100]. + + +s_min (float) — the start value of the sigma range where we add noise (enable stochasticity). +A reasonable range is [0, 10]. + + +s_max (float) — the end value of the sigma range where we add noise. +A reasonable range is [0.2, 80]. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details on the parameters, see the original paper’s Appendix E.: “Elucidating the Design Space of +Diffusion-Based Generative Models.” https://arxiv.org/abs/2206.00364. The grid search values used to find the +optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. + +add_noise_to_input + +< +source +> +( +sample: FloatTensor +sigma: float +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Explicit Langevin-like “churn” step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. +TODO Args: + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +return_dict: bool = True + +) +→ +KarrasVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class +KarrasVeOutput — updated sample in the diffusion chain and derivative (TODO double check). + + +Returns + +KarrasVeOutput or tuple + + + +KarrasVeOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). 
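Because several of the arguments above are only marked TODO, here is a hypothetical sketch of how set_timesteps(), add_noise_to_input() and step() fit together in a sampling loop, loosely modeled on the deprecated KarrasVePipeline; the checkpoint name is a placeholder and the exact input/output preconditioning of the model is an assumption that depends on how the denoiser was trained:

import torch
from diffusers import KarrasVeScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")  # placeholder checkpoint, not VE-trained
scheduler = KarrasVeScheduler()
scheduler.set_timesteps(num_inference_steps=50)

generator = torch.manual_seed(0)
size = model.config.sample_size
sample = torch.randn(1, 3, size, size) * scheduler.config.sigma_max  # start from pure noise

for t in scheduler.timesteps:
    sigma = scheduler.schedule[t]
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0.0

    # Langevin-like "churn": add noise to reach the temporarily increased level sigma_hat.
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma, generator=generator)

    # Evaluate the denoiser at sigma_hat (preconditioning assumed), then step down to sigma_prev.
    model_output = (sigma_hat / 2) * model((sample_hat + 1) / 2, sigma_hat / 2).sample
    step_output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)
    sample = step_output.prev_sample

# The optional second-order correction via step_correct(), documented next, is omitted for brevity.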
+ +step_correct + +< +source +> +( +model_output: FloatTensor +sigma_hat: float +sigma_prev: float +sample_hat: FloatTensor +sample_prev: FloatTensor +derivative: FloatTensor +return_dict: bool = True + +) +→ +prev_sample (TODO) + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sigma_hat (float) — TODO + + +sigma_prev (float) — TODO + + +sample_hat (torch.FloatTensor) — TODO + + +sample_prev (torch.FloatTensor) — TODO + + +derivative (torch.FloatTensor) — TODO + + +return_dict (bool) — option for returning tuple rather than KarrasVeOutput class + + +Returns + +prev_sample (TODO) + + + +updated sample in the diffusion chain. derivative (TODO): TODO + + +Correct the predicted sample based on the output model_output of the network. TODO complete description diff --git a/scrapped_outputs/32e7fa9e3d69ff9ce20f1f3539b41dd9.txt b/scrapped_outputs/32e7fa9e3d69ff9ce20f1f3539b41dd9.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/32e7fa9e3d69ff9ce20f1f3539b41dd9.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. 
Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To dislay in google colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/32f7d14507b13b5858e16ee005375ff8.txt b/scrapped_outputs/32f7d14507b13b5858e16ee005375ff8.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/32f7d14507b13b5858e16ee005375ff8.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/3328964edb2937eddb5fcecf1b10efaf.txt b/scrapped_outputs/3328964edb2937eddb5fcecf1b10efaf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/3328964edb2937eddb5fcecf1b10efaf.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speed up training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll find a function that generates the timestep weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic).
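The script itself does not show that caching step, so here is a minimal sketch of one way to do it with 🤗 Datasets; it assumes the train_dataset and the compute_embeddings_fn / compute_vae_encodings_fn mappings used in the next snippet, and the cache directory name is purely hypothetical: Copied import os
from datasets import load_from_disk

cache_path = "precomputed_sdxl_embeddings"  # hypothetical location

if os.path.isdir(cache_path):
    # Reuse embeddings written by an earlier run; the text encoders and VAE
    # never need to be loaded in this case.
    train_dataset = load_from_disk(cache_path)
else:
    # `compute_embeddings_fn` and `compute_vae_encodings_fn` are the mapping
    # functions shown in the next snippet.
    train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
    train_dataset = train_dataset.map(compute_vae_encodings_fn, batched=True)
    train_dataset.save_to_disk(cache_path)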
Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." 
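+# Optional note (not part of the original example): if VRAM is tight, skip the
+# .to("cuda") call above and call pipeline.enable_model_cpu_offload() instead,
+# which keeps submodules on the CPU until they are actually needed.
+# pipeline.enable_model_cpu_offload()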
+image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training a SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use it’s refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/3343a36d8d2d73129f46b53ef6276682.txt b/scrapped_outputs/3343a36d8d2d73129f46b53ef6276682.txt new file mode 100644 index 0000000000000000000000000000000000000000..6643d5b160e9a2f169422a8c75cb5792940e3a60 --- /dev/null +++ b/scrapped_outputs/3343a36d8d2d73129f46b53ef6276682.txt @@ -0,0 +1,250 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. 
When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_ckip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/3344202ba865a0451371b2d239b0de8a.txt b/scrapped_outputs/3344202ba865a0451371b2d239b0de8a.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/3344202ba865a0451371b2d239b0de8a.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. 
The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. 
First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since the text embeddings have been computed, remove the text_encoder and pipe from memory, and free up some GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass them to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
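As a follow-up to the memory-saving walkthrough above, here is a minimal, untested sketch of the load_in_4bit variant it mentions (same checkpoint and arguments as the 8-bit example; the exact savings depend on your setup): Copied from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline

# Assumption: bitsandbytes is installed; load_in_4bit mirrors the load_in_8bit flag used above.
text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_4bit=True,
    device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="auto",
)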
__call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it’s should be the embeddings of the "" +string. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/33450c008e5596873e17fd09eaba6b98.txt b/scrapped_outputs/33450c008e5596873e17fd09eaba6b98.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4bf68022a6cca56941ea31fe97ff663e04a88a6 --- /dev/null +++ b/scrapped_outputs/33450c008e5596873e17fd09eaba6b98.txt @@ -0,0 +1,66 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.00009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.00009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
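To show how the scheduler methods documented below fit together, here is a rough, self-contained sketch that stands in for the transformer with a uniform prediction over the non-mask classes (toy shapes only; this is not the full VQ-Diffusion pipeline): Copied import math
import torch
from diffusers import VQDiffusionScheduler

# Toy dimensions: 10 latent "pixels", 8 embedding classes plus 1 mask class.
num_vec_classes = 9
num_latent_pixels = 10
batch_size = 1

scheduler = VQDiffusionScheduler(num_vec_classes=num_vec_classes, num_train_timesteps=100)
scheduler.set_timesteps(num_inference_steps=50)

# Start fully masked: every latent pixel begins in the mask class (the last index).
sample = torch.full((batch_size, num_latent_pixels), num_vec_classes - 1, dtype=torch.long)

for t in scheduler.timesteps:
    # A real pipeline would get these log probabilities from a text-conditioned transformer;
    # here we substitute a uniform distribution over the non-mask classes.
    log_p_x_0 = torch.full(
        (batch_size, num_vec_classes - 1, num_latent_pixels),
        -math.log(num_vec_classes - 1),
    )
    sample = scheduler.step(log_p_x_0, t, sample).prev_sample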
log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms):
q_0(x_t | x_{t-1} = C_0)   ...   q_n(x_t | x_{t-1} = C_0)
          .                 .                .
          .                  .               .
          .                   .              .
q_0(x_t | x_{t-1} = C_k)   ...   q_n(x_t | x_{t-1} = C_k)
cumulative result (omitting logarithms):
q_0_cumulative(x_t | x_0 = C_0)       ...   q_n_cumulative(x_t | x_0 = C_0)
          .                            .                   .
          .                             .                  .
          .                              .                 .
q_0_cumulative(x_t | x_0 = C_{k-1})   ...   q_n_cumulative(x_t | x_0 = C_{k-1})
Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used. Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1. + Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t.
generator (torch.Generator or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computed. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of the previous timestep. prev_sample should be used as the next model input in the +denoising loop. Output class for the scheduler’s step() function. diff --git a/scrapped_outputs/33696355aa2eaa263cbec0ec80bde77e.txt b/scrapped_outputs/33696355aa2eaa263cbec0ec80bde77e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/3371e17d8ddbfeb48261dad4a9a9f7f2.txt b/scrapped_outputs/3371e17d8ddbfeb48261dad4a9a9f7f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/3371e17d8ddbfeb48261dad4a9a9f7f2.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on macOS devices. You’ll need to have: a macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) an arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline onto your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure.
When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/33804380226f30b150fa9a7909627423.txt b/scrapped_outputs/33804380226f30b150fa9a7909627423.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/33804380226f30b150fa9a7909627423.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model.
One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to setup the optimizer to learn the prior mode’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +output_dir = "kandi2-prior-pokemon-model"  # the --output_dir used in the training command above +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +negative_prompt = "low quality, bad quality"  # example negative prompt; the original snippet referenced an undefined variable +image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/33958ebd92cc4c86834ec45dae76c37f.txt b/scrapped_outputs/33958ebd92cc4c86834ec45dae76c37f.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b663b381edb40c44b5dc45124142bca44fb798 --- /dev/null +++ b/scrapped_outputs/33958ebd92cc4c86834ec45dae76c37f.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code.
However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). 
Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + 
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/339ce86235d6863416f95b99fb65cf1b.txt b/scrapped_outputs/339ce86235d6863416f95b99fb65cf1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..7bd504477a205d9c0dc87eaed2a11b9b0351b8cf --- /dev/null +++ b/scrapped_outputs/339ce86235d6863416f95b99fb65cf1b.txt @@ -0,0 +1,271 @@ +Image Variation + + +StableDiffusionImageVariationPipeline + +StableDiffusionImageVariationPipeline lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of Stable Diffusion model, trained by Justin Pinkney (@Buntworthy) at Lambda. +The original codebase can be found here: +Stable Diffusion Image Variations +Available Checkpoints are: +sd-image-variations-diffusers: lambdalabs/sd-image-variations-diffusers + +class diffusers.StableDiffusionImageVariationPipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: CLIPVisionModelWithProjection +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. Stable Diffusion Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline to generate variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPImageProcessor + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array.
+ + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/33bc95e79cfdd14b32b68afffa4933d4.txt b/scrapped_outputs/33bc95e79cfdd14b32b68afffa4933d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..8278105dee001f6035eed45d773a2780bc02b523 --- /dev/null +++ b/scrapped_outputs/33bc95e79cfdd14b32b68afffa4933d4.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0. SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/33bed4ad4ca04e2b1ded10d3f4b8ebaa.txt b/scrapped_outputs/33bed4ad4ca04e2b1ded10d3f4b8ebaa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/33bfa52c2a2db32665c6345bc7e03351.txt b/scrapped_outputs/33bfa52c2a2db32665c6345bc7e03351.txt new file mode 100644 index 0000000000000000000000000000000000000000..bdb12cb9f8ec935ec9417d06fc21a1176b44b6b4 --- /dev/null +++ b/scrapped_outputs/33bfa52c2a2db32665c6345bc7e03351.txt @@ -0,0 +1,248 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. 
DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). 
Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. 
It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. 
Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MB. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, and AnimateDiff, and you can use any custom models finetuned from the same base models. It also works with LCM-LoRA out of the box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise, you can also load a CLIPVisionModelWithProjection (https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/clip#transformers.CLIPVisionModelWithProjection) model and pass it to a Stable Diffusion pipeline when you create it.
+ + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add “sunglasses”. 😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality, wearing sunglasses', +    ip_adapter_image=image, +    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", +    num_inference_steps=50, +    generator=generator, +).images +images[0]     You can use the set_ip_adapter_scale() method to adjust the text prompt and image prompt condition ratio.  If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. +scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See below examples of how you can use it with Image-to-Image and Inpaint. image-to-image inpaint Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality', +    image = image, +    ip_adapter_image=ip_image, +    num_inference_steps=50, +    generator=generator, +    strength=0.6, +).images +images[0] IP-Adapters can also be used with SDXL Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. 
It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face model. Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image You can load multiple IP-Adapter models and use multiple reference images at the same time. In this example we use IP-Adapter-Plus face model to create a consistent character and also use IP-Adapter-Plus model along with 10 images to create a coherent style in the image we generate. Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() + +face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] + +generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0]     style input image face input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. 
Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a feature request if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet! You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff. ControlNet AnimateDiff Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image diff --git a/scrapped_outputs/33ef5bd3ee550ce307f23f5aa8d7c991.txt b/scrapped_outputs/33ef5bd3ee550ce307f23f5aa8d7c991.txt new file mode 100644 index 0000000000000000000000000000000000000000..be96d2b3cd8b0e9da6f4a7f488cab978fbcab007 --- /dev/null +++ b/scrapped_outputs/33ef5bd3ee550ce307f23f5aa8d7c991.txt @@ -0,0 +1,25 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. 
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/34000a9f14c7b990180cba6c526fc4b4.txt b/scrapped_outputs/34000a9f14c7b990180cba6c526fc4b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..6024bf1a00e90500c0a7ce1aa584ee7f009df150 --- /dev/null +++ b/scrapped_outputs/34000a9f14c7b990180cba6c526fc4b4.txt @@ -0,0 +1,448 @@ +Singlestep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverSinglestepScheduler + + +class diffusers.DPMSolverSinglestepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. 
+ + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon), or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. + + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. For singlestep schedulers, we recommend to enable +this to use up all the function evaluations. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the singlestep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. 
They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +get_order_list + +< +source +> +( +num_inference_steps: int + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Computes the solver order at each time step. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +singlestep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. 
+ + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-2]. + +singlestep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-3]. + +singlestep_dpm_solver_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +order (int) — +the solver order at this step. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the singlestep DPM-Solver. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the singlestep DPM-Solver. diff --git a/scrapped_outputs/3415cf0f1cd23df97cffb9b71ac78b73.txt b/scrapped_outputs/3415cf0f1cd23df97cffb9b71ac78b73.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/3415cf0f1cd23df97cffb9b71ac78b73.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. 
The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hojonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines.
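The __call__ arguments documented above can be combined for reproducible batch generation. The snippet below is a minimal sketch that only relies on the checkpoint and parameters referenced above (batch_size, generator, num_inference_steps); the reduced step count is purely illustrative and trades image quality for speed. Copied import torch
from diffusers import DDPMPipeline

# load the unconditional DDPM pipeline referenced above
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")

# a seeded generator makes the sampled noise, and therefore the images, reproducible
generator = torch.Generator(device="cuda").manual_seed(0)

# generate a batch of four images in a single call; fewer steps is faster but lowers quality
images = pipe(batch_size=4, generator=generator, num_inference_steps=250).images

for i, image in enumerate(images):
    image.save(f"ddpm_cat_{i}.png")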
diff --git a/scrapped_outputs/3435e7525d5c8497108c5fc9028ec59c.txt b/scrapped_outputs/3435e7525d5c8497108c5fc9028ec59c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/3438ed5bed7fe452a3e42a6df1a67da1.txt b/scrapped_outputs/3438ed5bed7fe452a3e42a6df1a67da1.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/3438ed5bed7fe452a3e42a6df1a67da1.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
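The reference above documents the pipeline's components but not its __call__ signature, so the following is only a hypothetical sketch of the usual plan-and-act loop; the environment id, the checkpoint name, and the planning_horizon/n_guide_steps/scale keyword names are assumptions rather than facts taken from this page. Copied import gym
from diffusers.experimental import ValueGuidedRLPipeline

# assumptions: a d4rl-style Hopper environment and a community value-function checkpoint
env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",  # assumed checkpoint name
    env=env,
)

obs = env.reset()
for _ in range(100):
    # plan a trajectory by denoising and take the first action of the best plan
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)  # assumed kwargs
    obs, reward, done, info = env.step(action)
    if done:
        break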
diff --git a/scrapped_outputs/34752a87b6dbce1bb93b329a7e0bc7c9.txt b/scrapped_outputs/34752a87b6dbce1bb93b329a7e0bc7c9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/34869565778e633dd4fcb1602cc90950.txt b/scrapped_outputs/34869565778e633dd4fcb1602cc90950.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/34869565778e633dd4fcb1602cc90950.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/34bd8a250a9fe425adf13bd2bf86cddf.txt b/scrapped_outputs/34bd8a250a9fe425adf13bd2bf86cddf.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/34bd8a250a9fe425adf13bd2bf86cddf.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. 
The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. 
this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/34d0acf53eda910a9d84b97cffd65b24.txt b/scrapped_outputs/34d0acf53eda910a9d84b97cffd65b24.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/34d0acf53eda910a9d84b97cffd65b24.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. 
However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
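As a usage note, the options documented above can be combined. The following is a minimal sketch that reuses the model and scheduler from the example earlier on this page and enables the memory and seam options described above (enable_vae_slicing() and circular_padding=True); the exact values for view_batch_size and width are illustrative. Copied import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# decode the VAE in slices to lower peak memory for the wide output
pipe.enable_vae_slicing()

image = pipe(
    "a photo of the dolomites",
    width=2048,             # panorama-like aspect ratio (the documented default)
    view_batch_size=2,      # denoise more views per forward pass on capable GPUs
    circular_padding=True,  # avoid a stitching seam between the left and right edges
).images[0]
image.save("dolomites_panorama.png")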
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/34e0eb9c7de81f3142e182401f22a301.txt b/scrapped_outputs/34e0eb9c7de81f3142e182401f22a301.txt new file mode 100644 index 0000000000000000000000000000000000000000..e54eec4e353c68fea93ef96f1cfd56712863b328 --- /dev/null +++ b/scrapped_outputs/34e0eb9c7de81f3142e182401f22a301.txt @@ -0,0 +1,180 @@ +Low-Rank Adaptation of Large Language Models (LoRA) + +Currently, LoRA is only supported for the attention layers of the UNet2DConditionModel. We also +support LoRA fine-tuning of the text encoder for DreamBooth in a limited capacity. For more details on how we support +LoRA fine-tuning of the text encoder, refer to the discussion on this PR. +Low-Rank Adaptation of Large Language Models (LoRA) is a training method that accelerates the training of large models while consuming less memory. It adds pairs of rank-decomposition weight matrices (called update matrices) to existing weights, and only trains those newly added weights. This has a couple of advantages: +Previous pretrained weights are kept frozen so the model is not as prone to catastrophic forgetting. +Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
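Concretely, for a frozen weight matrix W, LoRA learns a low-rank update B A (with rank r much smaller than the layer dimensions), so the adapted layer computes W x + (alpha / r) * B A x. A minimal, illustrative PyTorch sketch of the idea (not the actual diffusers implementation):

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                                    # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)    # A: down-projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)   # B: up-projection
        nn.init.zeros_(self.lora_b.weight)                             # the update starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path plus the scaled low-rank update
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))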
+LoRA matrices are generally added to the attention layers of the original model. 🧨 Diffusers provides the load_attn_procs() method to load the LoRA weights into a model’s attention layers. You can control the extent to which the model is adapted toward new training images via a scale parameter. +The greater memory-efficiency allows you to run fine-tuning on consumer GPUs like the Tesla T4, RTX 3080 or even the RTX 2080 Ti! GPUs like the T4 are free and readily accessible in Kaggle or Google Colab notebooks. +💡 LoRA is not only limited to attention layers. The authors found that amending +the attention layers of a language model is sufficient to obtain good downstream performance with great efficiency. This is why it’s common to just add the LoRA weights to the attention layers of a model. Check out the Using LoRA for efficient Stable Diffusion fine-tuning blog for more information about how LoRA works! +cloneofsimo was the first to try out LoRA training for Stable Diffusion in the popular lora GitHub repository. 🧨 Diffusers now supports finetuning with LoRA for text-to-image generation and DreamBooth. This guide will show you how to do both. +If you’d like to store or share your model with the community, login to your Hugging Face account (create one if you don’t have one already): + + + Copied +huggingface-cli login + +Text-to-image + +Finetuning a model like Stable Diffusion, which has billions of parameters, can be slow and difficult. With LoRA, it is much easier and faster to finetune a diffusion model. It can run on hardware with as little as 11GB of GPU RAM without resorting to tricks such as 8-bit optimizers. + +Training + +Let’s finetune stable-diffusion-v1-5 on the Pokémon BLIP captions dataset to generate your own Pokémon. +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. You’ll also need to set the DATASET_NAME environment variable to the name of the dataset you want to train on. +The OUTPUT_DIR and HUB_MODEL_ID variables are optional and specify where to save the model to on the Hub: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" +There are some flags to be aware of before you start training: +--push_to_hub stores the trained LoRA embeddings on the Hub. +--report_to=wandb reports and logs the training results to your Weights & Biases dashboard (as an example, take a look at this report). +--learning_rate=1e-04, you can afford to use a higher learning rate than you normally would with LoRA. +Now you’re ready to launch the training (you can find the full training script here): + + + Copied +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." 
\ + --seed=1337 + +Inference + +Now you can use the model for inference by loading the base model in the StableDiffusionPipeline and then the DPMSolverMultistepScheduler: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler + +>>> model_base = "runwayml/stable-diffusion-v1-5" + +>>> pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +Load the LoRA weights from your finetuned model on top of the base model weights, and then move the pipeline to a GPU for faster inference. When you merge the LoRA weights with the frozen pretrained model weights, you can optionally adjust how much of the weights to merge with the scale parameter: +💡 A scale value of 0 is the same as not using your LoRA weights and you’re only using the base model weights, and a scale value of 1 means you’re only using the fully finetuned LoRA weights. Values between 0 and 1 interpolate between the two weights. + + + Copied +>>> pipe.unet.load_attn_procs(model_path)  # model_path is your LoRA weights directory (the OUTPUT_DIR from training) or a Hub repo id +>>> pipe.to("cuda") +# use half the weights from the LoRA finetuned model and half the weights from the base model + +>>> image = pipe( +... "A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5, cross_attention_kwargs={"scale": 0.5} +... ).images[0] +# use the weights from the fully finetuned LoRA model + +>>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0] +>>> image.save("blue_pokemon.png") + +DreamBooth + +DreamBooth is a finetuning technique for personalizing a text-to-image model like Stable Diffusion to generate photorealistic images of a subject in different contexts, given a few images of the subject. However, DreamBooth is very sensitive to hyperparameters and it is easy to overfit. Some important hyperparameters to consider include those that affect the training time (learning rate, number of training steps), and inference time (number of steps, scheduler type). +💡 Take a look at the Training Stable Diffusion with DreamBooth using 🧨 Diffusers blog for an in-depth analysis of DreamBooth experiments and recommended settings. + +Training + +Let’s finetune stable-diffusion-v1-5 using DreamBooth and LoRA on some 🐶 dog images. Download and save these images to a directory (one way to do this is shown in the snapshot_download sketch after the flag list below). +To start, specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the pretrained_model_name_or_path argument. You’ll also need to set INSTANCE_DIR to the path of the directory containing the images. +The OUTPUT_DIR variable is optional and specifies where to save the model: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="path-to-instance-images" +export OUTPUT_DIR="path-to-save-model" +There are some flags to be aware of before you start training: +--push_to_hub stores the trained LoRA embeddings on the Hub. +--report_to=wandb reports and logs the training results to your Weights & Biases dashboard (as an example, take a look at this report). +--learning_rate=1e-04, you can afford to use a higher learning rate than you normally would with LoRA.
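One way to fetch a small set of dog images is with huggingface_hub (a sketch; it assumes the diffusers/dog-example dataset on the Hub):

from huggingface_hub import snapshot_download

local_dir = "./dog"  # use this path as INSTANCE_DIR
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)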
+Now you’re ready to launch the training (you can find the full training script here): + + + Copied +accelerate launch train_dreambooth_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --checkpointing_steps=100 \ + --learning_rate=1e-4 \ + --report_to="wandb" \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --validation_prompt="A photo of sks dog in a bucket" \ + --validation_epochs=50 \ + --seed="0" \ + --push_to_hub +It’s also possible to additionally fine-tune the text encoder with LoRA. In most cases, this leads +to better results with a slight increase in compute. To fine-tune the text encoder with LoRA, +pass the --train_text_encoder flag when launching the train_dreambooth_lora.py script. + +Inference + +Now you can use the model for inference by loading the base model in the StableDiffusionPipeline: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> model_base = "runwayml/stable-diffusion-v1-5" + +>>> pipe = StableDiffusionPipeline.from_pretrained(model_base, torch_dtype=torch.float16) +Load the LoRA weights from your finetuned DreamBooth model on top of the base model weights, and then move the pipeline to a GPU for faster inference. When you merge the LoRA weights with the frozen pretrained model weights, you can optionally adjust how much of the weights to merge with the scale parameter: +💡 A scale value of 0 is the same as not using your LoRA weights and you’re only using the base model weights, and a scale value of 1 means you’re only using the fully finetuned LoRA weights. Values between 0 and 1 interpolate between the two weights. + + + Copied +>>> pipe.unet.load_attn_procs(model_path) +>>> pipe.to("cuda") +# use half the weights from the LoRA finetuned model and half the weights from the base model + +>>> image = pipe( +... "A picture of a sks dog in a bucket.", +... num_inference_steps=25, +... guidance_scale=7.5, +... cross_attention_kwargs={"scale": 0.5}, +... ).images[0] +# use the weights from the fully finetuned LoRA model + +>>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0] +>>> image.save("bucket-dog.png") diff --git a/scrapped_outputs/35241b66a5fa6d96e9ad60d6e7d5d90c.txt b/scrapped_outputs/35241b66a5fa6d96e9ad60d6e7d5d90c.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/35241b66a5fa6d96e9ad60d6e7d5d90c.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline.
If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/353f4ab53a7121873d9a1e15f7ba8edd.txt b/scrapped_outputs/353f4ab53a7121873d9a1e15f7ba8edd.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed35a98294cb8a50e4dac13e73718fc964b0d4d5 --- /dev/null +++ b/scrapped_outputs/353f4ab53a7121873d9a1e15f7ba8edd.txt @@ -0,0 +1,313 @@ +Super-Resolution + + +StableDiffusionUpscalePipeline + +The upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. StableDiffusionUpscalePipeline can be used to enhance the resolution of input images by a factor of 4. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stabilityai/stable-diffusion-x4-upscaler (x4 resolution): stable-diffusion-x4-upscaler + +class diffusers.StableDiffusionUpscalePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +low_res_scheduler: DDPMScheduler +scheduler: KarrasDiffusionSchedulers +max_noise_level: int = 350 + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
+ + +low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low res conditioning image. It must be an instance of +DDPMScheduler. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image super-resolution using Stable Diffusion 2. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[PIL.Image.Image]] = None +num_inference_steps: int = 75 +guidance_scale: float = 9.0 +noise_level: int = 20 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be upscaled. + + +num_inference_steps (int, optional, defaults to 75) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 9.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator.
+ + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler
+>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step.
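For example, building on the upscaling snippet above (a small illustrative addition, not part of the original reference), attention slicing can be toggled on the same pipeline object:

pipeline.enable_attention_slicing()     # trade a small amount of speed for lower peak memory
upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
pipeline.disable_attention_slicing()    # switch back to single-step attention when memory allows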
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/355a1565a00ef3d3e8870d96043fabd6.txt b/scrapped_outputs/355a1565a00ef3d3e8870d96043fabd6.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/355a1565a00ef3d3e8870d96043fabd6.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. 
clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/355c5ad6b55b13c2b7a8b812f301b091.txt b/scrapped_outputs/355c5ad6b55b13c2b7a8b812f301b091.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b8423989949022327acbe936067e63af1f8776 --- /dev/null +++ b/scrapped_outputs/355c5ad6b55b13c2b7a8b812f301b091.txt @@ -0,0 +1,124 @@ +Stable diffusion pipelines + +Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. +Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the specific pipeline for latent diffusion that is part of 🤗 Diffusers. +For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official launch announcement post and this section of our own blog post. +Tips: +To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: +Overview: +Pipeline +Tasks +Colab +Demo +StableDiffusionPipeline +Text-to-Image Generation + +🤗 Stable Diffusion +StableDiffusionImg2ImgPipeline +Image-to-Image Text-Guided Generation + +🤗 Diffuse the Rest +StableDiffusionInpaintPipeline +Experimental – Text-Guided Image Inpainting + +Coming soon +StableDiffusionDepth2ImgPipeline +Experimental – Depth-to-Image Text-Guided Generation + +Coming soon +StableDiffusionImageVariationPipeline +Experimental – Image Variation Generation + +🤗 Stable Diffusion Image Variations +StableDiffusionUpscalePipeline +Experimental – Text-Guided Image Super-Resolution + +Coming soon +StableDiffusionLatentUpscalePipeline +Experimental – Text-Guided Image Super-Resolution + +Coming soon +StableDiffusionInstructPix2PixPipeline +Experimental – Text-Based Image Editing + +InstructPix2Pix: Learning to Follow Image Editing Instructions +StableDiffusionAttendAndExcitePipeline +Experimental – Text-to-Image Generation + +Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models +StableDiffusionPix2PixZeroPipeline +Experimental – Text-Based Image Editing + +Zero-shot Image-to-Image Translation +StableDiffusionModelEditingPipeline +Experimental – Text-to-Image Model Editing + +Editing Implicit Assumptions in Text-to-Image Diffusion Models + +Tips + + +How to load and use different schedulers. + +The stable diffusion pipeline uses PNDMScheduler scheduler by default. 
But diffusers provides many other schedulers that can be used with the stable diffusion pipeline such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) + +How to convert all use cases with multiple or single pipeline + +If you want to use all possible use cases in a single DiffusionPipeline you can either: +Make use of the Stable Diffusion Mega Pipeline or +Make use of the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline + +StableDiffusionPipelineOutput + + +class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/35c9bb3caf5510156a315d6bb71eb552.txt b/scrapped_outputs/35c9bb3caf5510156a315d6bb71eb552.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7329224ae6981db4a03041ff4d3d5107653214c --- /dev/null +++ b/scrapped_outputs/35c9bb3caf5510156a315d6bb71eb552.txt @@ -0,0 +1,235 @@ +variance exploding stochastic differential equation (VE-SDE) scheduler + + +Overview + +Original paper can be found here. + +ScoreSdeVeScheduler + + +class diffusers.ScoreSdeVeScheduler + +< +source +> +( +num_train_timesteps: int = 2000 +snr: float = 0.15 +sigma_min: float = 0.01 +sigma_max: float = 1348.0 +sampling_eps: float = 1e-05 +correct_steps: int = 1 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +snr (float) — +coefficient weighting the step from the model_output sample (from the network) to the random noise. 
+ + +sigma_min (float) — +initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the +distribution of the data. + + +sigma_max (float) — maximum value used for the range of continuous timesteps passed into the model. + + +sampling_eps (float) — the end value of sampling, where timesteps decrease progressively from 1 to +epsilon. — + + +correct_steps (int) — number of correction steps performed on a produced sample. + + + +The variance exploding stochastic differential equation (SDE) scheduler. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_sigmas + +< +source +> +( +num_inference_steps: int +sigma_min: float = None +sigma_max: float = None +sampling_eps: float = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +sigma_min (float, optional) — +initial noise scale value (overrides value given at Scheduler instantiation). + + +sigma_max (float, optional) — final noise scale value (overrides value given at Scheduler instantiation). + + +sampling_eps (float, optional) — final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. +The sigmas control the weight of the drift and diffusion components of sample update. + +set_timesteps + +< +source +> +( +num_inference_steps: int +sampling_eps: float = None +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +sampling_eps (float, optional) — final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step_correct + +< +source +> +( +model_output: FloatTensor +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Correct the predicted sample based on the output model_output of the network. This is often run repeatedly +after making the prediction for the previous timestep. 
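step_correct is typically paired with step_pred (documented next) in a predictor-corrector sampling loop. A rough sketch of that loop, loosely following ScoreSdeVePipeline and assuming an unconditional score model unet and an already-instantiated scheduler:

import torch

# start from pure noise at the maximum noise scale
sample = torch.randn(1, 3, 256, 256) * scheduler.config.sigma_max
scheduler.set_timesteps(num_inference_steps=2000)
scheduler.set_sigmas(num_inference_steps=2000)

for i, t in enumerate(scheduler.timesteps):
    sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0])

    # corrector: Langevin-style refinement at the current noise level
    for _ in range(scheduler.config.correct_steps):
        model_output = unet(sample, sigma_t).sample
        sample = scheduler.step_correct(model_output, sample).prev_sample

    # predictor: reverse-SDE step towards the previous (lower) noise level
    model_output = unet(sample, sigma_t).sample
    sample = scheduler.step_pred(model_output, t, sample).prev_sample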
+ +step_pred + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/35f8f5a098c20d8bc4fc522e6b45166c.txt b/scrapped_outputs/35f8f5a098c20d8bc4fc522e6b45166c.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/35f8f5a098c20d8bc4fc522e6b45166c.txt @@ -0,0 +1,115 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: a token whose embeddings are used to initialize the embeddings of the modifier_token Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion trains on the target images together with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts.
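A simplified sketch of that token setup (illustrative only: the placeholder token, initializer word, and checkpoint are assumptions, and the real script handles multiple tokens and additional edge cases):

from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder")

tokenizer.add_tokens("<new1>")                                                # register the modifier token
modifier_token_id = tokenizer.convert_tokens_to_ids("<new1>")
initializer_token_id = tokenizer.encode("ktn", add_special_tokens=False)[0]   # assumed initializer word

text_encoder.resize_token_embeddings(len(tokenizer))                          # make room for the new embedding
token_embeds = text_encoder.get_input_embeddings().weight.data
token_embeds[modifier_token_id] = token_embeds[initializer_token_id]          # initialize from the initializer token

The script then freezes the remaining text encoder parameters, as shown next.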
Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. 
You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use <new1> as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with single concept multiple concepts Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a <new1> cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "<new1>" \ + --validation_prompt="<new1> cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin") + +image = pipeline( + "<new1> cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/3621efea38d10e617a24c3d678c5c58a.txt b/scrapped_outputs/3621efea38d10e617a24c3d678c5c58a.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/3621efea38d10e617a24c3d678c5c58a.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding.
+The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image from a text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card auto-accepts it for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next we install diffusers and dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default diffusers makes use of model cpu offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
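For instance, here is a minimal sketch of that components trick (ours, not part of the original guide; variable names are assumptions), reusing already-loaded stage 1 weights for image-to-image without loading them a second time:

import torch
from diffusers import DiffusionPipeline, IFImg2ImgPipeline

# Load the text-to-image stage 1 pipeline once...
stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)

# ...and build the image-to-image pipeline from the same in-memory components,
# so no weights are downloaded or loaded twice.
stage_1_img2img = IFImg2ImgPipeline(**stage_1.components)
stage_1_img2img.enable_model_cpu_offload()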
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can also be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components attribute as explained here.
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for a smaller number of timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image, which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs: either the model-based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer-based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8-bit precision: Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU-RAM-constrained machines like the Google Colab free tier, where we can’t load all model components to the CPU at once, we can manually load the pipeline with only the text encoder or only the UNet when the respective model components are needed.
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: pipeline_if.py (Text-to-Image Generation), pipeline_if_superresolution.py (Text-to-Image Generation), pipeline_if_img2img.py (Image-to-Image Generation), pipeline_if_img2img_superresolution.py (Image-to-Image Generation), pipeline_if_inpainting.py (Image-to-Image Generation), pipeline_if_inpainting_superresolution.py (Image-to-Image Generation). IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
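To make the encode_prompt() parameters above concrete, here is a small usage sketch (ours, not from the official docs; prompts and names are illustrative). The returned embeddings can be computed once and reused across multiple pipeline calls without re-running the T5 text encoder.

import torch
from diffusers import IFImg2ImgPipeline

pipe = IFImg2ImgPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Encode the prompt (and a negative prompt for classifier-free guidance) once;
# clean_caption=True requires beautifulsoup4 and ftfy to be installed.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "A fantasy landscape in style minecraft",
    negative_prompt="blurry, low quality",
    clean_caption=True,
)
# The precomputed embeddings can then be passed to the pipeline via
# prompt_embeds= and negative_prompt_embeds= instead of raw text.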
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier-free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states.
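The embeddings returned by encode_prompt can also be computed with an explicit negative prompt and then reused across both stages. The following is a minimal sketch, not part of the original example; the negative prompt text is illustrative: Copied
import torch
from diffusers import IFInpaintingPipeline

pipe = IFInpaintingPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Encode once; the returned embeddings can be passed as prompt_embeds /
# negative_prompt_embeds to both the stage I and stage II pipelines.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "blue sunglasses",
    negative_prompt="low quality, watermark",
)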
diff --git a/scrapped_outputs/3643459fe32e690a80c27131ccbb7088.txt b/scrapped_outputs/3643459fe32e690a80c27131ccbb7088.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/3643459fe32e690a80c27131ccbb7088.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
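Beyond the presets, the safety concept can be edited in place and the individual sld_* arguments documented in __call__ below can be passed directly. A minimal sketch, assuming the safety_concept property is writable as the "check and edit" tip above implies; the concept string and SLD values are illustrative, not recommended defaults: Copied
from diffusers import StableDiffusionPipelineSafe

pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")

# Replace the active safety concept with a custom one (illustrative string)
pipeline.safety_concept = "violence, blood, weapons, injury"

# Pass SLD arguments directly instead of unpacking a SafetyConfig preset
image = pipeline(
    prompt="portrait of a medieval knight",
    sld_guidance_scale=2000,
    sld_warmup_steps=7,
    sld_threshold=0.025,
    sld_momentum_scale=0.5,
    sld_mom_beta=0.7,
).images[0]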
StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c.
leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/364386405d34bdbd3d48c7f77cbd7109.txt b/scrapped_outputs/364386405d34bdbd3d48c7f77cbd7109.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/364386405d34bdbd3d48c7f77cbd7109.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions. 
Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +generation. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple.
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
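Because aMUSEd generates an image in only a handful of steps, it is well suited to producing whole batches in a single call, as mentioned in the introduction above. A minimal sketch, not from the original docs; the checkpoint, batch size, and step count are illustrative: Copied
import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained("amused/amused-256", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# One call produces a full batch; the few-step schedule keeps this fast,
# especially at larger batch sizes.
prompt = "a photo of an astronaut riding a horse on mars"
images = pipe(prompt, num_images_per_prompt=8, num_inference_steps=12).images
for i, image in enumerate(images):
    image.save(f"astronaut_{i}.png")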
class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/367b12a908890db0be11ec86439d3af4.txt b/scrapped_outputs/367b12a908890db0be11ec86439d3af4.txt new file mode 100644 index 0000000000000000000000000000000000000000..c796491cbfe9ea7c96684c36934fc2d682903305 --- /dev/null +++ b/scrapped_outputs/367b12a908890db0be11ec86439d3af4.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. 
To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/36841a3604083ba220ca511ddf52fe42.txt b/scrapped_outputs/36841a3604083ba220ca511ddf52fe42.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/36841a3604083ba220ca511ddf52fe42.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. 
The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model, so check out its API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples of how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!
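As a quick illustration of the scheduler recommendation above, here is a minimal sketch (assuming the 768x768 stabilityai/stable-diffusion-2 checkpoint from the table) that runs the DPMSolverMultistepScheduler with 20 steps; the guide's task-specific examples follow.

from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

# 768x768 text-to-image checkpoint from the table above
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "High quality photo of an astronaut riding a horse in space"
image = pipe(prompt, num_inference_steps=20).images[0]
image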
Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/3685d0628c20819df67fbc57c6366a8e.txt b/scrapped_outputs/3685d0628c20819df67fbc57c6366a8e.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/3685d0628c20819df67fbc57c6366a8e.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. 
Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/3692cb3fc70c62bd3fb0c991ca86e4ad.txt b/scrapped_outputs/3692cb3fc70c62bd3fb0c991ca86e4ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/3692cb3fc70c62bd3fb0c991ca86e4ad.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/370a71e785611a1fe3a0edd8571274c5.txt b/scrapped_outputs/370a71e785611a1fe3a0edd8571274c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/370a71e785611a1fe3a0edd8571274c5.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. 
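As a rough usage sketch for AdaLayerNorm (the forward call shown here, taking the hidden states plus an integer timestep index, is an assumption inferred from the constructor arguments rather than something stated in this reference):

import torch
from diffusers.models.normalization import AdaLayerNorm

# embedding_dim must match the last dimension of the hidden states;
# num_embeddings is the size of the timestep-embedding lookup table.
norm = AdaLayerNorm(embedding_dim=64, num_embeddings=1000)

hidden_states = torch.randn(2, 16, 64)  # (batch, sequence, embedding_dim)
timestep = torch.tensor(10)             # assumed: an index into the embedding table
out = norm(hidden_states, timestep)     # assumed forward(x, timestep) signature
print(out.shape)                        # torch.Size([2, 16, 64])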
AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/37101f978381b52a323bfc4ab932d55f.txt b/scrapped_outputs/37101f978381b52a323bfc4ab932d55f.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/37101f978381b52a323bfc4ab932d55f.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. 
Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It takes just 7 GB of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth Vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL The Zeroscope models are watermark-free and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower-resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth Vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. 
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. 
diff --git a/scrapped_outputs/3716a047d46189d7d066b8332d637e54.txt b/scrapped_outputs/3716a047d46189d7d066b8332d637e54.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/37494805004c4052bc5119f1b4d4531b.txt b/scrapped_outputs/37494805004c4052bc5119f1b4d4531b.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/37494805004c4052bc5119f1b4d4531b.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. 
Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. 
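Tying the scheduler table and the loading methods above together, a minimal sketch might look like the following (the runwayml/stable-diffusion-v1-5 checkpoint is only an illustrative choice):

from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# "DPM++ 2M Karras" from the table above: DPMSolverMultistepScheduler with use_karras_sigmas=True
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

# The scheduler configuration can be saved on its own and reloaded later
pipe.scheduler.save_pretrained("./my-scheduler")
scheduler = DPMSolverMultistepScheduler.from_pretrained("./my-scheduler")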
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/3775424aaa63e6ec3b933520f635027f.txt b/scrapped_outputs/3775424aaa63e6ec3b933520f635027f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/378faa05b75f3b90cb1c4e34daf62b0d.txt b/scrapped_outputs/378faa05b75f3b90cb1c4e34daf62b0d.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/378faa05b75f3b90cb1c4e34daf62b0d.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. 
LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. 
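A rough sketch of how get_list_adapters() fits in once several adapters are loaded (the repositories and weight names are reused from the examples above; the exact shape of the returned dictionary is an assumption):

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# Maps each pipeline component to the adapters registered on it,
# e.g. {"unet": ["toy", "pixel"], ...} (assumed shape)
print(pipeline.get_list_adapters())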
load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. 
+Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. 
If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. 
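The loader methods above are typically called on a pipeline instance rather than in isolation. A minimal sketch of a common flow, assuming a Stable Diffusion v1-5 pipeline and a placeholder LoRA repository id (not a real checkpoint):
Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a LoRA checkpoint under an explicit adapter name (the repo id is illustrative).
pipe.load_lora_weights("your-username/your-lora", adapter_name="my_lora")

# Optionally fuse the LoRA into the base weights for faster inference, then reverse it.
pipe.fuse_lora()
image = pipe("a photo of a cat wearing sunglasses").images[0]
pipe.unfuse_lora()

# Keep the un-fused adapter loaded but move it to the CPU to free GPU memory.
pipe.set_lora_device(["my_lora"], device="cpu")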
unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/3794e36527616a44bcd0cd1310c5cfc2.txt b/scrapped_outputs/3794e36527616a44bcd0cd1310c5cfc2.txt new file mode 100644 index 0000000000000000000000000000000000000000..350fcde2194ed65053fe8201403456d5175dba21 --- /dev/null +++ b/scrapped_outputs/3794e36527616a44bcd0cd1310c5cfc2.txt @@ -0,0 +1,98 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. 
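As a rough sketch of how these recommendations translate into code, using a common Stable Diffusion checkpoint purely for illustration:
Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the pipeline's scheduler config and switch to the SDE variant of DPM-Solver++
# with the second-order solver recommended for guided sampling.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", solver_order=2
)

# DPM-Solver usually produces good samples in roughly 20 steps.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]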
DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/37a87e38aeb4b28f60fc4edfb084be73.txt b/scrapped_outputs/37a87e38aeb4b28f60fc4edfb084be73.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/37a87e38aeb4b28f60fc4edfb084be73.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. 
Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. 
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/37aadefd9d88be79c26880d15d1891fe.txt b/scrapped_outputs/37aadefd9d88be79c26880d15d1891fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..82d259981b4276cfd5bd3165faef3807c13cf6b7 --- /dev/null +++ b/scrapped_outputs/37aadefd9d88be79c26880d15d1891fe.txt @@ -0,0 +1,331 @@ +Loading + +A core premise of the diffusers library is to make diffusion models as accessible as possible. +Accessibility is therefore achieved by providing an API to load complete diffusion pipelines as well as individual components with a single line of code. 
+In the following we explain in-detail how to easily load: +Complete Diffusion Pipelines via the DiffusionPipeline.from_pretrained() +Diffusion Models via ModelMixin.from_pretrained() +Schedulers via SchedulerMixin.from_pretrained() + +Loading pipelines + +The DiffusionPipeline class is the easiest way to access any diffusion model that is available on the Hub. Let’s look at an example on how to download CompVis’ Latent Diffusion model. + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "CompVis/ldm-text2im-large-256" +ldm = DiffusionPipeline.from_pretrained(repo_id) +Here DiffusionPipeline automatically detects the correct pipeline (i.e. LDMTextToImagePipeline), downloads and caches all required configuration and weight files (if not already done so), and finally returns a pipeline instance, called ldm. +The pipeline instance can then be called using LDMTextToImagePipeline.call() (i.e., ldm("image of a astronaut riding a horse")) for text-to-image generation. +Instead of using the generic DiffusionPipeline class for loading, you can also load the appropriate pipeline class directly. The code snippet above yields the same instance as when doing: + + + Copied +from diffusers import LDMTextToImagePipeline + +repo_id = "CompVis/ldm-text2im-large-256" +ldm = LDMTextToImagePipeline.from_pretrained(repo_id) +Diffusion pipelines like LDMTextToImagePipeline often consist of multiple components. These components can be both parameterized models, such as "unet", "vqvae" and “bert”, tokenizers or schedulers. These components can interact in complex ways with each other when using the pipeline in inference, e.g. for LDMTextToImagePipeline or StableDiffusionPipeline the inference call is explained here. +The purpose of the pipeline classes is to wrap the complexity of these diffusion systems and give the user an easy-to-use API while staying flexible for customization, as will be shown later. + +Loading pipelines that require access request + +Due to the capabilities of diffusion models to generate extremely realistic images, there is a certain danger that such models might be misused for unwanted applications, e.g. generating pornography or violent images. +In order to minimize the possibility of such unsolicited use cases, some of the most powerful diffusion models require users to acknowledge a license before being able to use the model. If the user does not agree to the license, the pipeline cannot be downloaded. +If you try to load runwayml/stable-diffusion-v1-5 the same way as done previously: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) +it will only work if you have both click-accepted the license on the model card and are logged into the Hugging Face Hub. Otherwise you will get an error message +such as the following: + + + Copied +OSError: runwayml/stable-diffusion-v1-5 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' +If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` +Therefore, we need to make sure to click-accept the license. You can do this by simply visiting +the model card and clicking on “Agree and access repository”: + + + +Second, you need to login with your access token: + + + Copied +huggingface-cli login +before trying to load the model. 
Or alternatively, you can pass your access token directly via the flag use_auth_token. In this case you do not need +to run huggingface-cli login before: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_auth_token="") +The final option to use pipelines that require access without having to rely on the Hugging Face Hub is to load the pipeline locally as explained in the next section. + +Loading pipelines locally + +If you prefer to have complete control over the pipeline and its corresponding files or, as said before, if you want to use pipelines that require an access request without having to be connected to the Hugging Face Hub, +we recommend loading pipelines locally. +To load a diffusion pipeline locally, you first need to manually download the whole folder structure on your local disk and then pass a local path to the DiffusionPipeline.from_pretrained(). Let’s again look at an example for +CompVis’ Latent Diffusion model. +First, you should make use of git-lfs to download the whole folder structure that has been uploaded to the model repository: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +The command above will create a local folder called ./stable-diffusion-v1-5 on your disk. +Now, all you have to do is to simply pass the local folder path to from_pretrained: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) +If repo_id is a local path, as it is the case here, DiffusionPipeline.from_pretrained() will automatically detect it and therefore not try to download any files from the Hub. +While we usually recommend to load weights directly from the Hub to be certain to stay up to date with the newest changes, loading pipelines locally should be preferred if one +wants to stay anonymous, self-contained applications, etc… + +Loading customized pipelines + +Advanced users that want to load customized versions of diffusion pipelines can do so by swapping any of the default components, e.g. the scheduler, with other scheduler classes. +A classical use case of this functionality is to swap the scheduler. Stable Diffusion v1-5 uses the PNDMScheduler by default which is generally not the most performant scheduler. Since the release +of stable diffusion, multiple improved schedulers have been published. To use those, the user has to manually load their preferred scheduler and pass it into DiffusionPipeline.from_pretrained(). +E.g. to use EulerDiscreteScheduler or DPMSolverMultistepScheduler to have a better quality vs. generation speed trade-off for inference, one could load them as follows: + + + Copied +from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" + +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +# or +# scheduler = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler) +Three things are worth paying attention to here. 
+First, the scheduler is loaded with SchedulerMixin.from_pretrained() +Second, the scheduler is loaded with a function argument, called subfolder="scheduler" as the configuration of stable diffusion’s scheduling is defined in a subfolder of the official pipeline repository +Third, the scheduler instance can simply be passed with the scheduler keyword argument to DiffusionPipeline.from_pretrained(). This works because the StableDiffusionPipeline defines its scheduler with the scheduler attribute. It’s not possible to use a different name, such as sampler=scheduler since sampler is not a defined keyword for StableDiffusionPipeline.__init__() +Not only the scheduler components can be customized for diffusion pipelines; in theory, all components of a pipeline can be customized. In practice, however, it often only makes sense to switch out a component that has compatible alternatives to what the pipeline expects. +Many scheduler classes are compatible with each other as can be seen here. This is not always the case for other components, such as the "unet". +One special case that can also be customized is the "safety_checker" of stable diffusion. If you believe the safety checker doesn’t serve you any good, you can simply disable it by passing None: + + + Copied +from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler + +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None) +Another common use case is to reuse the same components in multiple pipelines, e.g. the weights and configurations of "runwayml/stable-diffusion-v1-5" can be used for both StableDiffusionPipeline and StableDiffusionImg2ImgPipeline and we might not want to +use the exact same weights into RAM twice. In this case, customizing all the input instances would help us +to only load the weights into RAM once: + + + Copied +from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) + +components = stable_diffusion_txt2img.components + +# weights are not reloaded into RAM +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) +Note how the above code snippet makes use of DiffusionPipeline.components. + +How does loading work? + +As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: +Download the latest version of the folder structure required to run the repo_id with diffusers and cache them. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() will simply reuse the cache and not re-download the files. +Load the cached weights into the correct pipeline class – one of the officially supported pipeline classes - and return an instance of the class. The correct pipeline class is thereby retrieved from the model_index.json file. +The underlying folder structure of diffusion pipelines correspond 1-to-1 to their corresponding class instances, e.g. LDMTextToImagePipeline for CompVis/ldm-text2im-large-256 +This can be understood better by looking at an example. 
Let’s print out pipeline class instance pipeline we just defined: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "CompVis/ldm-text2im-large-256" +ldm = DiffusionPipeline.from_pretrained(repo_id) +print(ldm) +Output: + + + Copied +LDMTextToImagePipeline { + "bert": [ + "latent_diffusion", + "LDMBertModel" + ], + "scheduler": [ + "diffusers", + "DDIMScheduler" + ], + "tokenizer": [ + "transformers", + "BertTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vqvae": [ + "diffusers", + "AutoencoderKL" + ] +} +First, we see that the official pipeline is the LDMTextToImagePipeline, and second we see that the LDMTextToImagePipeline consists of 5 components: +"bert" of class LDMBertModel as defined in the pipeline +"scheduler" of class DDIMScheduler +"tokenizer" of class BertTokenizer as defined in transformers +"unet" of class UNet2DConditionModel +"vqvae" of class AutoencoderKL +Let’s now compare the pipeline instance to the folder structure of the model repository CompVis/ldm-text2im-large-256. Looking at the folder structure of CompVis/ldm-text2im-large-256 on the Hub, we can see it matches 1-to-1 the printed out instance of LDMTextToImagePipeline above: + + + Copied +. +├── bert +│   ├── config.json +│   └── pytorch_model.bin +├── model_index.json +├── scheduler +│   └── scheduler_config.json +├── tokenizer +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.txt +├── unet +│   ├── config.json +│   └── diffusion_pytorch_model.bin +└── vqvae + ├── config.json + └── diffusion_pytorch_model.bin +As we can see each attribute of the instance of LDMTextToImagePipeline has its configuration and possibly weights defined in a subfolder that is called exactly like the class attribute ("bert", "scheduler", "tokenizer", "unet", "vqvae"). Importantly, every pipeline expects a model_index.json file that tells the DiffusionPipeline both: +which pipeline class should be loaded, and +what sub-classes from which library are stored in which subfolders +In the case of CompVis/ldm-text2im-large-256 the model_index.json is therefore defined as follows: + + + Copied +{ + "_class_name": "LDMTextToImagePipeline", + "_diffusers_version": "0.0.4", + "bert": [ + "latent_diffusion", + "LDMBertModel" + ], + "scheduler": [ + "diffusers", + "DDIMScheduler" + ], + "tokenizer": [ + "transformers", + "BertTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vqvae": [ + "diffusers", + "AutoencoderKL" + ] +} +_class_name tells DiffusionPipeline which pipeline class should be loaded. +_diffusers_version can be useful to know under which diffusers version this model was created. +Every component of the pipeline is then defined under the form: + + + Copied +"name" : [ + "library", + "class" +] +The "name" field corresponds both to the name of the subfolder in which the configuration and weights are stored as well as the attribute name of the pipeline class (as can be seen here and here +The "library" field corresponds to the name of the library, e.g. diffusers or transformers from which the "class" should be loaded +The "class" field corresponds to the name of the class, e.g. BertTokenizer or UNet2DConditionModel + +Loading models + +Models as defined under src/diffusers/models can be loaded via the ModelMixin.from_pretrained() function. The API is very similar the DiffusionPipeline.from_pretrained() and works in the same way: +Download the latest version of the model weights and configuration with diffusers and cache them. 
If the latest files are available in the local cache, ModelMixin.from_pretrained() will simply reuse the cache and not re-download the files. +Load the cached weights into the defined model class - one of the existing model classes - and return an instance of the class. +In contrast to DiffusionPipeline.from_pretrained(), models rely on fewer files that usually don’t require a folder structure, but just a diffusion_pytorch_model.bin and config.json file. +Let’s look at an example: + + + Copied +from diffusers import UNet2DConditionModel + +repo_id = "CompVis/ldm-text2im-large-256" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet") +Note how we have to define the subfolder="unet" argument to tell ModelMixin.from_pretrained() that the model weights are located in a subfolder of the repository. +As explained in Loading customized pipelines, one can pass a loaded model to a diffusion pipeline, via DiffusionPipeline.from_pretrained(): + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "CompVis/ldm-text2im-large-256" +ldm = DiffusionPipeline.from_pretrained(repo_id, unet=model) +If the model files can be found directly at the root level, which is usually only the case for some very simple diffusion models, such as google/ddpm-cifar10-32, we don’t +need to pass a subfolder argument: + + + Copied +from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id) + +Loading schedulers + +Schedulers rely on SchedulerMixin.from_pretrained(). Schedulers are not parameterized or trained, but instead purely defined by a configuration file. +For consistency, we use the same method name as we do for models or pipelines, but no weights are loaded in this case. +In contrast to pipelines or models, loading schedulers does not consume any significant amount of memory and the same configuration file can often be used for a variety of different schedulers. 
+For example, all of: +DDPMScheduler +DDIMScheduler +PNDMScheduler +LMSDiscreteScheduler +EulerDiscreteScheduler +EulerAncestralDiscreteScheduler +DPMSolverMultistepScheduler +are compatible with StableDiffusionPipeline and therefore the same scheduler configuration file can be loaded in any of those classes: + + + Copied +from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerDiscreteScheduler, + EulerAncestralDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler`, `euler_anc` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm) diff --git a/scrapped_outputs/37b0a2b947f558a0e71e64f30b2e6415.txt b/scrapped_outputs/37b0a2b947f558a0e71e64f30b2e6415.txt new file mode 100644 index 0000000000000000000000000000000000000000..46b497020216eabb3750dd7c2b1feffa0b29e64b --- /dev/null +++ b/scrapped_outputs/37b0a2b947f558a0e71e64f30b2e6415.txt @@ -0,0 +1,342 @@ +Pix2Pix Zero Zero-shot Image-to-Image Translation is by Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. The abstract from the paper is: Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing. You can find additional information about Pix2Pix Zero on the project page, original codebase, and try it out in a demo. Tips The pipeline can be conditioned on real input images. Check out the code examples below to know more. The pipeline exposes two arguments namely source_embeds and target_embeds +that let you control the direction of the semantic edits in the final image to be generated. 
Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the pipeline, you simply have to set the embeddings related to the phrases including “cat” to +source_embeds and “dog” to target_embeds. Refer to the code example below for more details. When you’re using this pipeline from a prompt, specify the source concept in the prompt. Taking +the above example, a valid input prompt would be: “a high resolution painting of a cat in the style of van gogh”. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_embeds and target_embeds. Change the input prompt to include “dog”. To learn more about how the source and target embeddings are generated, refer to the original paper. Below, we also provide some directions on how to generate the embeddings. Note that the quality of the outputs generated with this pipeline is dependent on how good the source_embeds and target_embeds are. Please, refer to this discussion for some suggestions on the topic. Available Pipelines: Pipeline Tasks Demo StableDiffusionPix2PixZeroPipeline Text-Based Image Editing 🤗 Space Usage example Based on an image generated with the input prompt Copied import requests +import torch + +from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +def download(embedding_url, local_filepath): + r = requests.get(embedding_url) + with open(local_filepath, "wb") as f: + f.write(r.content) + + +model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + model_ckpt, conditions_input_image=False, torch_dtype=torch.float16 +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "a high resolution painting of a cat in the style of van gogh" +src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt" +target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt" + +for url in [src_embs_url, target_embs_url]: + download(url, url.split("/")[-1]) + +src_embeds = torch.load(src_embs_url.split("/")[-1]) +target_embeds = torch.load(target_embs_url.split("/")[-1]) + +image = pipeline( + prompt, + source_embeds=src_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images[0] +image Based on an input image When the pipeline is conditioned on an input image, we first obtain an inverted +noise from it using a DDIMInverseScheduler with the help of a generated caption. Then the inverted noise is used to start the generation process. 
First, let’s load our pipeline: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +captioner_id = "Salesforce/blip-image-captioning-base" +processor = BlipProcessor.from_pretrained(captioner_id) +model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True) + +sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + sd_model_ckpt, + caption_generator=model, + caption_processor=processor, + torch_dtype=torch.float16, + safety_checker=None, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() Then, we load an input image for conditioning and obtain a suitable caption for it: Copied from diffusers.utils import load_image + +img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" +raw_image = load_image(img_url).resize((512, 512)) +caption = pipeline.generate_caption(raw_image) +caption Then we employ the generated caption and the input image to get the inverted noise: Copied generator = torch.manual_seed(0) +inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents Now, generate the image with edit directions: Copied # See the "Generating source and target embeddings" section below to +# automate the generation of these captions with a pre-trained model like Flan-T5 as explained below. +source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] +target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +source_embeds = pipeline.get_embeds(source_prompts, batch_size=2) +target_embeds = pipeline.get_embeds(target_prompts, batch_size=2) + + +image = pipeline( + caption, + source_embeds=source_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, + generator=generator, + latents=inv_latents, + negative_prompt=caption, +).images[0] +image Generating source and target embeddings The authors originally used the GPT-3 API to generate the source and target captions for discovering +edit directions. However, we can also leverage open source and public models for the same purpose. +Below, we provide an end-to-end example with the Flan-T5 model +for generating captions and CLIP for +computing embeddings on the generated captions. 1. Load the generation model: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) 2. Construct a starting prompt: Copied source_concept = "cat" +target_concept = "dog" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Here, we’re interested in the “cat -> dog” direction. 3. Generate captions: We can use a utility like so for this purpose. 
Copied def generate_captions(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) And then we just call it to generate our captions: Copied source_captions = generate_captions(source_text) +target_captions = generate_captions(target_text) +print(source_captions, target_captions, sep='\n') We encourage you to play around with the different parameters supported by the +generate() method (documentation) for the generation quality you are looking for. 4. Load the embedding model: Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model. Copied from diffusers import StableDiffusionPix2PixZeroPipeline + +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +) +pipeline = pipeline.to("cuda") +tokenizer = pipeline.tokenizer +text_encoder = pipeline.text_encoder 5. Compute embeddings: Copied import torch + +def embed_captions(sentences, tokenizer, text_encoder, device="cuda"): + with torch.no_grad(): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeddings = embed_captions(source_captions, tokenizer, text_encoder) +target_embeddings = embed_captions(target_captions, tokenizer, text_encoder) And you’re done! Here is a Colab Notebook that you can use to interact with the entire process. Now, you can use these embeddings directly while calling the pipeline: Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + +image = pipeline( + prompt, + source_embeds=source_embeddings, + target_embeds=target_embeddings, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPix2PixZeroPipeline class diffusers.StableDiffusionPix2PixZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] feature_extractor: CLIPImageProcessor safety_checker: StableDiffusionSafetyChecker inverse_scheduler: DDIMInverseScheduler caption_generator: BlipForConditionalGeneration caption_processor: BlipProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. 
Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. requires_safety_checker (bool) — +Whether the pipeline requires a safety checker. We recommend setting it to True if you’re using the +pipeline publicly. Pipeline for pixel-level image editing using Pix2Pix Zero. Based on Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: typing.Union[typing.List[str], str, NoneType] = None source_embeds: Tensor = None target_embeds: Tensor = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None cross_attention_guidance_amount: float = 0.1 output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. source_embeds (torch.Tensor) — +Source concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. target_embeds (torch.Tensor) — +Target concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import requests +>>> import torch + +>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +>>> def download(embedding_url, local_filepath): +... r = requests.get(embedding_url) +... with open(local_filepath, "wb") as f: +... 
f.write(r.content) + + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.to("cuda") + +>>> prompt = "a high resolution painting of a cat in the style of van gough" +>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt" +>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt" + +>>> for url in [source_emb_url, target_emb_url]: +... download(url, url.split("/")[-1]) + +>>> src_embeds = torch.load(source_emb_url.split("/")[-1]) +>>> target_embeds = torch.load(target_emb_url.split("/")[-1]) +>>> images = pipeline( +... prompt, +... source_embeds=src_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... ).images + +>>> images[0].save("edited_image_dog.png") construct_direction < source > ( embs_source: Tensor embs_target: Tensor ) Constructs the edit direction to steer the image generation process semantically. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. generate_caption < source > ( images ) Generates caption for a given image. 
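The edit direction produced by construct_direction is what steers generation from the source concept toward the target concept. As a rough, self-contained sketch (an illustration of the idea, not the pipeline's exact code), the direction can be formed from the averaged source and target embeddings returned by get_embeds:
Copied
import torch


def construct_edit_direction(embs_source: torch.Tensor, embs_target: torch.Tensor) -> torch.Tensor:
    # Average each stack of concept embeddings (e.g. several "cat" captions vs. several "dog" captions)
    # and take the difference; the result points from the source concept toward the target concept.
    return (embs_target.mean(0) - embs_source.mean(0)).unsqueeze(0)


# Dummy embeddings with shape (num_prompts, sequence_length, hidden_dim)
src = torch.randn(3, 77, 768)
tgt = torch.randn(3, 77, 768)
direction = construct_edit_direction(src, tgt)  # shape (1, 77, 768)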
invert < source > ( prompt: typing.Optional[str] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None num_inference_steps: int = 50 guidance_scale: float = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None cross_attention_guidance_amount: float = 0.1 output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 5 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch which will be used for conditioning. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. 
lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback–Leibler divergence output num_reg_steps (int, optional, defaults to 5) — +Number of regularization loss steps num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps Function used to generate inverted latents given a prompt and image. Examples: Copied >>> import torch +>>> from transformers import BlipForConditionalGeneration, BlipProcessor +>>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +>>> import requests +>>> from PIL import Image + +>>> captioner_id = "Salesforce/blip-image-captioning-base" +>>> processor = BlipProcessor.from_pretrained(captioner_id) +>>> model = BlipForConditionalGeneration.from_pretrained( +... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True +... ) + +>>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( +... sd_model_ckpt, +... caption_generator=model, +... caption_processor=processor, +... torch_dtype=torch.float16, +... safety_checker=None, +... ) + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" + +>>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +>>> # generate caption +>>> caption = pipeline.generate_caption(raw_image) + +>>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii" +>>> inv_latents = pipeline.invert(caption, image=raw_image).latents +>>> # we need to generate source and target embeds + +>>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] + +>>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +>>> source_embeds = pipeline.get_embeds(source_prompts) +>>> target_embeds = pipeline.get_embeds(target_prompts) +>>> # the latents can then be used to edit a real image +>>> # when using Stable Diffusion 2 or other models that use v-prediction +>>> # set `cross_attention_guidance_amount` to 0.01 or less to avoid input latent gradient explosion + +>>> generator = torch.manual_seed(0)  # define the generator used in the call below to make the edit reproducible + +>>> image = pipeline( +... caption, +... source_embeds=source_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... generator=generator, +... latents=inv_latents, +... negative_prompt=caption, +... ).images[0] +>>> image.save("edited_image.png") diff --git a/scrapped_outputs/37c01d73cc96290284a5dd83c1ab8ff5.txt b/scrapped_outputs/37c01d73cc96290284a5dd83c1ab8ff5.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/37c01d73cc96290284a5dd83c1ab8ff5.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer.
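As a hedged sketch of what a DDPO fine-tuning setup can look like with TRL (the class names come from TRL's DDPO examples; the constructor arguments, reward-function signature, and prompt-function signature below are assumptions that should be verified against the DDPOTrainer API reference linked next):
Copied
# Sketch only: assumes trl exposes DDPOConfig, DDPOTrainer and DefaultDDPOStableDiffusionPipeline
# with the reward/prompt function conventions used in the TRL DDPO examples.
from trl import DDPOConfig, DDPOTrainer, DefaultDDPOStableDiffusionPipeline


def prompt_fn():
    # One prompt plus a (possibly empty) metadata dict per sample.
    return "a photo of a cute corgi", {}


def reward_fn(images, prompts, metadata):
    # Placeholder reward that prefers brighter images; swap in an aesthetic or CLIP-based scorer.
    rewards = images.float().mean(dim=(1, 2, 3))
    return rewards, {}


config = DDPOConfig()  # adjust sampling/training fields as described in the TRL documentation
pipeline = DefaultDDPOStableDiffusionPipeline("runwayml/stable-diffusion-v1-5")
trainer = DDPOTrainer(config, reward_fn, prompt_fn, pipeline)
trainer.train()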
For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/37d3268b38c10534c341acaaca010bf2.txt b/scrapped_outputs/37d3268b38c10534c341acaaca010bf2.txt new file mode 100644 index 0000000000000000000000000000000000000000..6944df78621a4402f3dc6b9675ba16c4fe0e310a --- /dev/null +++ b/scrapped_outputs/37d3268b38c10534c341acaaca010bf2.txt @@ -0,0 +1,164 @@ +Euler Ancestral scheduler + + +Overview + +Ancestral sampling with Euler method steps. Based on the original (k-diffusion)[https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72] implementation by Katherine Crowson. +Fast scheduler which often times generates good outputs with 20-30 steps. + +EulerAncestralDiscreteScheduler + + +class diffusers.EulerAncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. 
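These methods are normally driven by a pipeline rather than called by hand. As an illustration of the "20-30 steps" point above, the scheduler can be swapped into a Stable Diffusion pipeline via from_config (the checkpoint name is only an example):
Copied
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Reuse the existing scheduler config so the beta schedule stays consistent with the checkpoint
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("euler_ancestral.png")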
+ +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +generator (torch.Generator, optional) — Random number generator. + + +return_dict (bool) — option for returning tuple rather than EulerAncestralDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput if return_dict is True, otherwise +a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/37df3278d735baf9a36e2d0595f0f987.txt b/scrapped_outputs/37df3278d735baf9a36e2d0595f0f987.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/37df3278d735baf9a36e2d0595f0f987.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. 
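In 🤗 Diffusers this scheduler is used by the VQ-Diffusion pipeline rather than instantiated directly. A minimal usage sketch, assuming the microsoft/vq-diffusion-ithq checkpoint:
Copied
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq").to("cuda")
print(type(pipe.scheduler).__name__)  # the pipeline ships with a VQDiffusionScheduler

image = pipe(prompt="teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("vq_diffusion.png")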
VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.00009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.00009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms): +q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0) + . . . + . . . + . . . +q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k) +cumulative result (omitting logarithms): +q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) + . . . + . . . + . . . +q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1}) + Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used.
Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1. + Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. generator (torch.Generator, or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computed. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/3825f30c0f475ab3862b7e930ac4e641.txt b/scrapped_outputs/3825f30c0f475ab3862b7e930ac4e641.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/3825f30c0f475ab3862b7e930ac4e641.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function.
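GEGLU projects the input to twice the output width and gates one half with a GELU of the other half. A rough functional equivalent (a sketch for intuition, not the library class itself):
Copied
import torch
import torch.nn as nn
import torch.nn.functional as F


class GEGLUSketch(nn.Module):
    def __init__(self, dim_in: int, dim_out: int, bias: bool = True):
        super().__init__()
        # Project to 2 * dim_out so the output can be split into a value half and a gate half.
        self.proj = nn.Linear(dim_in, dim_out * 2, bias=bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        value, gate = self.proj(x).chunk(2, dim=-1)
        return value * F.gelu(gate)


layer = GEGLUSketch(dim_in=320, dim_out=1280)
out = layer(torch.randn(2, 77, 320))  # -> shape (2, 77, 1280)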
ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/383c2f545b67c04e84e397d782999019.txt b/scrapped_outputs/383c2f545b67c04e84e397d782999019.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/383c2f545b67c04e84e397d782999019.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 
🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. 
In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/384d30129cbabcdb31e8d492f45d5156.txt b/scrapped_outputs/384d30129cbabcdb31e8d492f45d5156.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/384d30129cbabcdb31e8d492f45d5156.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 🧨 diff --git a/scrapped_outputs/38c60aa09a7741f7912e8dc1b72cb5f1.txt b/scrapped_outputs/38c60aa09a7741f7912e8dc1b72cb5f1.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/38c60aa09a7741f7912e8dc1b72cb5f1.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. 
While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
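For example, slicing can be switched on before a memory-heavy generation and switched off afterwards; the checkpoint and token indices below are taken from the example above:
Copied
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

pipe.enable_vae_slicing()  # decode latents in slices to lower peak memory
images = pipe(prompt="a cat and a frog", token_indices=[2, 5], num_inference_steps=50).images
pipe.disable_vae_slicing()  # restore single-step decoding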
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/38d26b04520ee1c3bcb46fba0a6cfe2e.txt b/scrapped_outputs/38d26b04520ee1c3bcb46fba0a6cfe2e.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/38d26b04520ee1c3bcb46fba0a6cfe2e.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. 
In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. 
diff --git a/scrapped_outputs/38e6b305ec4a9d20b36b57bf66083726.txt b/scrapped_outputs/38e6b305ec4a9d20b36b57bf66083726.txt new file mode 100644 index 0000000000000000000000000000000000000000..ba742a3486b776386c3822f9e8510f7b289ad220 --- /dev/null +++ b/scrapped_outputs/38e6b305ec4a9d20b36b57bf66083726.txt @@ -0,0 +1,338 @@ +Safe Stable Diffusion + +Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates the well known issue that models like Stable Diffusion that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, or otherwise offensive content. +Safe Stable Diffusion is an extension to the Stable Diffusion that drastically reduces content like this. +The abstract of the paper is the following: +Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_stable_diffusion_safe.py +Text-to-Image Generation + + + +Tips + +Safe Stable Diffusion may also be used with weights of Stable Diffusion. + +Run Safe Stable Diffusion + +Safe Stable Diffusion can be tested very easily with the StableDiffusionPipelineSafe, and the "AIML-TUDA/stable-diffusion-safe" checkpoint exactly in the same way it is shown in the Conditional Image Generation Guide. + +Interacting with the Safety Concept + +To check and edit the currently used safety concept, use the safety_concept property of StableDiffusionPipelineSafe + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. + +Using pre-defined safety configurations + +You may use the 4 configurations defined in the Safe Latent Diffusion paper as follows: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. 
leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) +The following configurations are available: SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX. + +How to load and use different schedulers. + +The safe stable diffusion pipeline uses PNDMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the stable diffusion pipeline such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("AIML-TUDA/stable-diffusion-safe", subfolder="scheduler") +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained( +... "AIML-TUDA/stable-diffusion-safe", scheduler=euler_scheduler +... ) + +StableDiffusionSafePipelineOutput + + +class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] +unsafe_images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType] +applied_safety_concept: typing.Optional[str] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. + + +applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled + + + +Output class for Safe Stable Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +StableDiffusionPipelineSafe + + +class diffusers.StableDiffusionPipelineSafe + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: SafeStableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
+ + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Safe Latent Diffusion. +The implementation is based on the StableDiffusionPipeline +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +sld_guidance_scale: typing.Optional[float] = 1000 +sld_warmup_steps: typing.Optional[int] = 10 +sld_threshold: typing.Optional[float] = 0.01 +sld_momentum_scale: typing.Optional[float] = 0.3 +sld_mom_beta: typing.Optional[float] = 0.4 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +sld_guidance_scale (float, optional, defaults to 1000) — +Safe latent guidance as defined in Safe Latent Diffusion. +sld_guidance_scale is defined as s_S of Eq. 6. If set to less than 1, safety guidance will be +disabled. + + +sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD will only be applied for diffusion steps greater than +sld_warmup_steps. sld_warmup_steps is defined as delta (δ) of Safe Latent +Diffusion. + + +sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_threshold +is defined as lambda (λ) of Eq. 5 in Safe Latent Diffusion. + + +sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_momentum_scale is defined as s_m of Eq. 7 in Safe Latent +Diffusion. + + +sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_mom_beta is defined as beta_m (β_m) of Eq. 8 in Safe Latent +Diffusion. + + +Returns

StableDiffusionPipelineOutput or tuple



StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.


Function invoked when calling the pipeline for generation.

enable_sequential_cpu_offload

<
source
>
(
)



Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
+text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
+torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
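The pieces documented above compose naturally. The following is a minimal sketch (the prompt and the explicit SLD values are illustrative, not recommendations): sequential CPU offload keeps GPU memory low, and the sld_* arguments are passed directly instead of unpacking one of the SafetyConfig presets.
Copied
import torch
from diffusers import StableDiffusionPipelineSafe

# Load the safe pipeline in half precision.
pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
)

# Offload submodules to CPU; each one is moved to the GPU only while its forward pass runs.
pipeline.enable_sequential_cpu_offload()

# Pass the safe latent diffusion arguments explicitly instead of a SafetyConfig preset.
out = pipeline(
    prompt="portrait of a woman, oil painting",  # illustrative prompt
    sld_guidance_scale=1000,  # values below 1 disable safety guidance
    sld_warmup_steps=10,      # safety guidance is only applied after this many steps
    sld_threshold=0.01,
    sld_momentum_scale=0.3,
    sld_mom_beta=0.4,
)
image = out.images[0]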
diff --git a/scrapped_outputs/390521c2794ea58ab57fbf73f1d2ee3a.txt b/scrapped_outputs/390521c2794ea58ab57fbf73f1d2ee3a.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c696398635d3121e95a98f588be43126adc80ee --- /dev/null +++ b/scrapped_outputs/390521c2794ea58ab57fbf73f1d2ee3a.txt @@ -0,0 +1,323 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Load pipeline from a local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary.
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
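To make the FreeU and fused-projection switches concrete, here is a small sketch. The FreeU factors below are only illustrative starting values; consult the official FreeU repository for combinations known to work well for your particular pipeline, and keep in mind that fuse_qkv_projections() is an experimental API.
Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU: b1/b2 amplify backbone features, s1/s2 attenuate skip features.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)  # illustrative values

# Experimental: fuse the query/key/value projection matrices in the UNet and VAE.
pipe.fuse_qkv_projections()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Both switches can be reverted.
pipe.unfuse_qkv_projections()
pipe.disable_freeu()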
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
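As a brief sketch of the replace helper (reusing the variable names from the Flax example above, which are assumed to still be in scope): it returns a copy of the output with the given fields swapped, leaving the original object untouched.
Copied
import numpy as np

# `output` is the FlaxStableDiffusionPipelineOutput returned when return_dict=True.
output = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True)

# With jit=True the images come back sharded per device; gather them into a single
# array and store the result in a new output object via `replace`.
gathered = np.asarray(output.images).reshape((num_samples,) + output.images.shape[-3:])
output_gathered = output.replace(images=gathered)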
diff --git a/scrapped_outputs/392b73191b267cadf5e54b6a4a8eff4b.txt b/scrapped_outputs/392b73191b267cadf5e54b6a4a8eff4b.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c0bd1dff1ec6566954d252cc7c21acc8827994e --- /dev/null +++ b/scrapped_outputs/392b73191b267cadf5e54b6a4a8eff4b.txt @@ -0,0 +1,310 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). 
adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale pushes the generated image towards the initial image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. A higher image guidance scale encourages the model to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and is ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple.
callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... 
guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/392ddac7e02e9da361af1833db2bcdbb.txt b/scrapped_outputs/392ddac7e02e9da361af1833db2bcdbb.txt new file mode 100644 index 0000000000000000000000000000000000000000..82a9574b3624949d4502b1947f4846160a989916 --- /dev/null +++ b/scrapped_outputs/392ddac7e02e9da361af1833db2bcdbb.txt @@ -0,0 +1,58 @@ +How to use Stable Diffusion on Habana Gaudi + +🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum Habana. + +Requirements + +Optimum Habana 1.3 or later, here is how to install it. +SynapseAI 1.7. + +Inference Pipeline + +To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: +A pipeline with GaudiStableDiffusionPipeline. This pipeline supports text-to-image generation. +A scheduler with GaudiDDIMScheduler. This scheduler has been optimized for Habana Gaudi. 
+When initializing the pipeline, you have to specify use_habana=True to deploy it on HPUs. +Furthermore, in order to get the fastest possible generations you should enable HPU graphs with use_hpu_graphs=True. +Finally, you will need to specify a Gaudi configuration which can be downloaded from the Hugging Face Hub. + + + Copied +from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion", +) +You can then call the pipeline to generate images in batches from one or several prompts: + + + Copied +outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) +For more information, check out Optimum Habana’s documentation and the example provided in the official GitHub repository. + +Benchmark + +Here are the latencies for Habana Gaudi 1 and Gaudi 2 with the Habana/stable-diffusion Gaudi configuration (mixed precision bf16/fp32):
+            Latency   Batch size
+Gaudi 1     4.37s     4/8
+Gaudi 2     1.19s     4/8
diff --git a/scrapped_outputs/395ac9aa52878b74ad6b9ae5ecd265ad.txt b/scrapped_outputs/395ac9aa52878b74ad6b9ae5ecd265ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..44fefb3c47f353a4d2bfd9d051ae6e0b396bc8d5 --- /dev/null +++ b/scrapped_outputs/395ac9aa52878b74ad6b9ae5ecd265ad.txt @@ -0,0 +1,4 @@ +Using Diffusers for audio + +DanceDiffusionPipeline and AudioDiffusionPipeline can be used to generate +audio rapidly! More coming soon! diff --git a/scrapped_outputs/396cc221aa9fafe0adb97a504b14cdba.txt b/scrapped_outputs/396cc221aa9fafe0adb97a504b14cdba.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/396cc221aa9fafe0adb97a504b14cdba.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption.
                             latency    speed-up
original                     9.50s      x1
fp16                         3.61s      x2.63
channels last                3.30s      x2.88
traced UNet                  3.21s      x2.96
memory-efficient attention   2.63s      x3.61
Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed.
To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. 
Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. 
To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. 
To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/3989c00ca6b04d43afcb7a918b81827f.txt b/scrapped_outputs/3989c00ca6b04d43afcb7a918b81827f.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/3989c00ca6b04d43afcb7a918b81827f.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. 
+ Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/39e5471ff3a75cf9a7db49481a58fdb2.txt b/scrapped_outputs/39e5471ff3a75cf9a7db49481a58fdb2.txt new file mode 100644 index 0000000000000000000000000000000000000000..848931d1969089ae8a8d21d431c071f2b1f6f901 --- /dev/null +++ b/scrapped_outputs/39e5471ff3a75cf9a7db49481a58fdb2.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. 
The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, default to True) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without loosing too much precision in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). wrapper < source > ( *args **kwargs ) wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. 
Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. 
scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/39f39f3fc4d8b2f696fa2ab0d50b86f0.txt b/scrapped_outputs/39f39f3fc4d8b2f696fa2ab0d50b86f0.txt new file mode 100644 index 0000000000000000000000000000000000000000..00a1475bcc5e7b5a02879867e599566c0cc82ebb --- /dev/null +++ b/scrapped_outputs/39f39f3fc4d8b2f696fa2ab0d50b86f0.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. 
The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints:
Checkpoint        Task            UNet Model Size   Total Model Size   Training Data / h
audioldm2         Text-to-audio   350M              1.1B               1150k
audioldm2-large   Text-to-audio   750M              1.5B               1150k
audioldm2-music   Text-to-music   350M              1.1B               665k
Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.
The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
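Before the full argument reference below, here is a minimal sketch that puts the prompting and inference tips above into practice for music generation. It assumes the text-to-music variant is published as "cvssp/audioldm2-music" (mirroring the cvssp/audioldm2 checkpoint used elsewhere on this page); the prompt and parameter values are illustrative: Copied
import scipy
import torch
from diffusers import AudioLDM2Pipeline

# assumption: the audioldm2-music checkpoint follows the same cvssp/ naming as the base model
pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2-music", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# descriptive, context-specific prompt plus the recommended negative prompt
prompt = "Relaxing piano and ambient strings, high quality, clear recording"
negative_prompt = "Low quality."

generator = torch.Generator("cuda").manual_seed(0)
audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,      # more steps: higher quality, slower inference
    audio_length_in_s=10.0,       # length of the generated clip in seconds
    num_waveforms_per_prompt=3,   # candidates are ranked by CLAP similarity to the prompt
    generator=generator,
).audios

# index 0 is the best-ranked waveform
scipy.io.wavfile.write("music_sample.wav", rate=16000, data=audio[0])
The same recipe applies to the general text-to-audio checkpoints from the table above; only the checkpoint id and the prompt change.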
__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. 
prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. 
This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 langauge model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... 
) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first. 
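Because the projection model only maps the CLAP and Flan-T5 embeddings into the width expected by the GPT2 language model, it can be exercised in isolation. The sketch below is illustrative only: it assumes the class is importable from the top-level diffusers namespace (as its documented path diffusers.AudioLDM2ProjectionModel suggests), and the widths (512, 1024, 768) and random tensors are assumptions rather than values taken from a released checkpoint. Copied
import torch
from diffusers import AudioLDM2ProjectionModel

# Illustrative widths (assumed): CLAP-style pooled embedding, Flan-T5 token embeddings, GPT2 hidden size.
projection = AudioLDM2ProjectionModel(
    text_encoder_dim=512,
    text_encoder_1_dim=1024,
    langauge_model_dim=768,  # parameter name as defined by the class above
)

clap_embeds = torch.randn(1, 1, 512)   # pooled CLAP embedding, sequence length 1
t5_embeds = torch.randn(1, 7, 1024)    # per-token T5 embeddings
t5_mask = torch.ones(1, 7, dtype=torch.long)

# Both sequences are projected to the language-model width and wrapped with the learned
# start/end vectors described above; the returned output bundles the projected states.
output = projection(hidden_states=clap_embeds, hidden_states_1=t5_embeds, attention_mask_1=t5_mask)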
forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. 
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). 
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/3a5bb25a9080430ca883fbba8af0229d.txt b/scrapped_outputs/3a5bb25a9080430ca883fbba8af0229d.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/3a5bb25a9080430ca883fbba8af0229d.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. 
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. 
norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/3a8845c26c07f3306caefc82b5cb5d50.txt b/scrapped_outputs/3a8845c26c07f3306caefc82b5cb5d50.txt new file mode 100644 index 0000000000000000000000000000000000000000..eae19b2e2becff1d17c9403c12096c6335834cf4 --- /dev/null +++ b/scrapped_outputs/3a8845c26c07f3306caefc82b5cb5d50.txt @@ -0,0 +1,26 @@ +PNDM Pseudo Numerical Methods for Diffusion Models on Manifolds (PNDM) is by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao. The abstract from the paper is: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). 
However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. The original codebase can be found at luping-liu/PNDM. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PNDMPipeline class diffusers.PNDMPipeline < source > ( unet: UNet2DModel scheduler: PNDMScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (PNDMScheduler) — +A PNDMScheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 50 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
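Since the pipeline is nothing more than a UNet2DModel paired with a PNDMScheduler, it can also be assembled by hand. The sketch below uses a tiny, randomly initialized UNet purely for illustration (an assumption, not a released checkpoint), so its output is noise-like; in practice you would load pretrained weights as in the example that follows. Copied
import torch
from diffusers import UNet2DModel, PNDMScheduler, PNDMPipeline

# Tiny, randomly initialized UNet -- illustration only, not a trained model.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    layers_per_block=1,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
scheduler = PNDMScheduler(num_train_timesteps=1000, skip_prk_steps=True)

pipeline = PNDMPipeline(unet=unet, scheduler=scheduler)
generator = torch.Generator().manual_seed(0)
images = pipeline(batch_size=1, num_inference_steps=10, generator=generator).images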
Example: Copied >>> from diffusers import PNDMPipeline + +>>> # load model and scheduler +>>> pndm = PNDMPipeline.from_pretrained("google/ddpm-cifar10-32") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pndm().images[0] + +>>> # save image +>>> image.save("pndm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/3ab83f48455ff11deabe05514df853c4.txt b/scrapped_outputs/3ab83f48455ff11deabe05514df853c4.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/3ab83f48455ff11deabe05514df853c4.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... 
hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... 
) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! 
If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... 
ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/3ad3b9e1ed753128a3c872f4c4e120f7.txt b/scrapped_outputs/3ad3b9e1ed753128a3c872f4c4e120f7.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/3ad3b9e1ed753128a3c872f4c4e120f7.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... 
torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. 
If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
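A short sketch of the LoRA workflow on this pipeline; the repository id "your-username/sd2-depth-lora" is a placeholder rather than a real checkpoint, and the adapter name is arbitrary. Copied
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder repository id -- substitute a LoRA trained against Stable Diffusion 2 (depth).
pipe.load_lora_weights("your-username/sd2-depth-lora", adapter_name="style")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
image = pipe(prompt="two tigers", image=init_image, strength=0.7).images[0]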
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/3ae61f8e6c463435c65d9afae58a218b.txt b/scrapped_outputs/3ae61f8e6c463435c65d9afae58a218b.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/3ae61f8e6c463435c65d9afae58a218b.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
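To illustrate how set_timesteps() and step() fit together, here is a minimal, hypothetical denoising loop; the denoiser function is only a stand-in for a trained UNet and the sample shape is arbitrary. Copied
import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(num_train_timesteps=1000, skip_prk_steps=True)
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 3, 64, 64)  # start from pure Gaussian noise

# Stand-in for a trained denoising model; it only mimics the expected call signature
def denoiser(x, t):
    return torch.randn_like(x)

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = denoiser(model_input, t)
    # step() returns a SchedulerOutput; prev_sample is the model input for the next iteration
    sample = scheduler.step(noise_pred, t, sample).prev_sample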
diff --git a/scrapped_outputs/3aef56ca273596964d5785324a240502.txt b/scrapped_outputs/3aef56ca273596964d5785324a240502.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/3aef56ca273596964d5785324a240502.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. Start by installing DeepCache: Copied pip install DeepCache Then load and enable the DeepCacheSDHelper: Copied import torch + from diffusers import StableDiffusionPipeline + pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") + ++ from DeepCache import DeepCacheSDHelper ++ helper = DeepCacheSDHelper(pipe=pipe) ++ helper.set_params( ++ cache_interval=3, ++ cache_branch_id=0, ++ ) ++ helper.enable() + + image = pipe("a photo of an astronaut on a moon").images[0] The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval means the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. +Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset. Benchmark We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Resolution Batch size Original DeepCache(I=3, B=0) DeepCache(I=5, B=0) DeepCache(I=5, B=1) 512 8 15.96 6.88(2.32x) 5.03(3.18x) 7.27(2.20x) 4 8.39 3.60(2.33x) 2.62(3.21x) 3.75(2.24x) 1 2.61 1.12(2.33x) 0.81(3.24x) 1.11(2.35x) 768 8 43.58 18.99(2.29x) 13.96(3.12x) 21.27(2.05x) 4 22.24 9.67(2.30x) 7.10(3.13x) 10.74(2.07x) 1 6.33 2.72(2.33x) 1.97(3.21x) 2.98(2.12x) 1024 8 101.95 45.57(2.24x) 33.72(3.02x) 53.00(1.92x) 4 49.25 21.86(2.25x) 16.19(3.04x) 25.78(1.91x) 1 13.83 6.07(2.28x) 4.43(3.12x) 7.15(1.93x) diff --git a/scrapped_outputs/3b0dd8dc295e10d1fa6a1bd14763b011.txt b/scrapped_outputs/3b0dd8dc295e10d1fa6a1bd14763b011.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/3b0dd8dc295e10d1fa6a1bd14763b011.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. 
You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. 
Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. 
The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/3b25372033d92d6e1349b45a3d5760f2.txt b/scrapped_outputs/3b25372033d92d6e1349b45a3d5760f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/3b25372033d92d6e1349b45a3d5760f2.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson.
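A minimal sketch of swapping this scheduler into an existing Stable Diffusion pipeline; the checkpoint and the 25-step setting are illustrative choices, not a recommendation from this page. Copied
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config so the noise schedule matches the checkpoint
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]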
EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/3b35d75f77c1a9b2ed4bcf1a0d48a5f8.txt b/scrapped_outputs/3b35d75f77c1a9b2ed4bcf1a0d48a5f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/3b35d75f77c1a9b2ed4bcf1a0d48a5f8.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. 
Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! diff --git a/scrapped_outputs/3b3d273d83f5153de3fe86aee73830e3.txt b/scrapped_outputs/3b3d273d83f5153de3fe86aee73830e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1a47bdc5912f4a654a85267915a87c071cf9cff --- /dev/null +++ b/scrapped_outputs/3b3d273d83f5153de3fe86aee73830e3.txt @@ -0,0 +1,159 @@ +Heun scheduler inspired by Karras et. al paper + + +Overview + +Algorithm 1 of Karras et. al. +Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +HeunDiscreteScheduler + + +class diffusers.HeunDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' +use_karras_sigmas: typing.Optional[bool] = False + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf). + + +use_karras_sigmas (bool, optional, defaults to False) — +This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the +noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence +of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf. + + + +Implements Algorithm 2 (Heun steps) from Karras et al. (2022). for discrete beta schedules. Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. 
— +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/3b40b2f6d7a457dec0b9257a5dcfac4d.txt b/scrapped_outputs/3b40b2f6d7a457dec0b9257a5dcfac4d.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/3b40b2f6d7a457dec0b9257a5dcfac4d.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. 
Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. 
Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/3b86204e3d069239f84d8464c535c0be.txt b/scrapped_outputs/3b86204e3d069239f84d8464c535c0be.txt new file mode 100644 index 0000000000000000000000000000000000000000..c927ce7d06865cbd6e16bcd6bb15efd3ab6ad802 --- /dev/null +++ b/scrapped_outputs/3b86204e3d069239f84d8464c535c0be.txt @@ -0,0 +1,152 @@ +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper + + +Overview + +Inspired by Karras et. al. Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +KDPM2AncestralDiscreteScheduler + + +class diffusers.KDPM2AncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. 
+ + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Scheduler created by @crowsonkb in k_diffusion, see: +https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 +Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/3ba7b648744e6da0834bc3a6cae8825b.txt b/scrapped_outputs/3ba7b648744e6da0834bc3a6cae8825b.txt new file mode 100644 index 0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/3ba7b648744e6da0834bc3a6cae8825b.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. 
It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. 
norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/3bb282653c338e6e4341d732d5072238.txt b/scrapped_outputs/3bb282653c338e6e4341d732d5072238.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/3bb282653c338e6e4341d732d5072238.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt unlike the existing text-to-image diffusion models such as Stable Diffusion which only generates an image. With almost the same number of parameters, LDM3D achieves to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper ldm3d-4c. The new version of LDM3D using 4 channels inputs instead of 6-channels inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. 
A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. 
We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods. Two checkpoints are available for use: ldm3d-pano, which enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline to be used, and ldm3d-sr, which enables the upscaling of RGB and depth images and can be used in a cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline community pipeline. diff --git a/scrapped_outputs/3c02a10cdb3cc33efb11d932f96e49b1.txt b/scrapped_outputs/3c02a10cdb3cc33efb11d932f96e49b1.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/3c02a10cdb3cc33efb11d932f96e49b1.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Oftentimes, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. 
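If you want to check the rough wall-clock numbers quoted in this tutorial on your own hardware, wrapping the pipeline call in a simple timer is enough; a minimal sketch that reuses the pipeline, prompt, and generator defined above: Copied
import time

# The pipeline call is synchronous and returns decoded PIL images, so this
# measurement covers the full denoising loop plus VAE decoding.
start = time.perf_counter()
image = pipeline(prompt, generator=generator).images[0]
print(f"generation took {time.perf_counter() - start:.1f} seconds")
Exact numbers depend on the GPU, driver, and current load, so treat the timings quoted here as rough guidance rather than benchmarks.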
Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. 
Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! 
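Negative prompts, which the checkpoint section above suggests researching, plug into the same call and can be combined with the helper from the Memory section; a minimal sketch (the negative prompt text is only an example):
Copied
# Add a negative prompt on top of the batched inputs; a single string is
# broadcast across the whole batch by the pipeline.
images = pipeline(
    **get_inputs(batch_size=8),
    negative_prompt="lowres, blurry, deformed, watermark",
).images
make_image_grid(images, rows=2, cols=4)
Because get_inputs() reuses the same seeds, this grid is directly comparable to the earlier ones, which makes it easy to judge what the negative prompt actually changes.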
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/3c1dab22043881b5253c4c0dde6b8de2.txt b/scrapped_outputs/3c1dab22043881b5253c4c0dde6b8de2.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/3c1dab22043881b5253c4c0dde6b8de2.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. 
If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. 
The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. 
The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. 
compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/3c21216973664f6394de6e5a014ad286.txt b/scrapped_outputs/3c21216973664f6394de6e5a014ad286.txt new file mode 100644 index 0000000000000000000000000000000000000000..b16b1a8e34aaa9499323c43a56a0084cfbc1c8e2 --- /dev/null +++ b/scrapped_outputs/3c21216973664f6394de6e5a014ad286.txt @@ -0,0 +1,42 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. 
KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/3c2b04f9ef327090fd5ec0ceaf131c34.txt b/scrapped_outputs/3c2b04f9ef327090fd5ec0ceaf131c34.txt new file mode 100644 index 0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/3c2b04f9ef327090fd5ec0ceaf131c34.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. 
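As with the other schedulers in this section, the easiest way to try it is to swap it into an existing pipeline with from_config(); a minimal sketch (the checkpoint, prompt, and step count are illustrative): Copied
import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Reuse the existing scheduler config so the noise schedule stays consistent.
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

# 20-30 steps are usually enough with this scheduler.
image = pipeline("a photograph of a lighthouse at dusk", num_inference_steps=25).images[0]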
EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/3c4d8bffdf6a9df66a85323c39c38109.txt b/scrapped_outputs/3c4d8bffdf6a9df66a85323c39c38109.txt new file mode 100644 index 0000000000000000000000000000000000000000..381f912386c920ec82a3d47a6a714ecee19c4eba --- /dev/null +++ b/scrapped_outputs/3c4d8bffdf6a9df66a85323c39c38109.txt @@ -0,0 +1,298 @@ +Stable Diffusion Latent Upscaler + + +StableDiffusionLatentUpscalePipeline + +The Stable Diffusion Latent Upscaler model was created by Katherine Crowson in collaboration with Stability AI. It can be used on top of any StableDiffusionUpscalePipeline checkpoint to enhance its output image resolution by a factor of 2. +A notebook that demonstrates the original implementation can be found here: +Stable Diffusion Upscaler Demo +Available Checkpoints are: +stabilityai/latent-upscaler: stabilityai/sd-x2-latent-upscaler + +class diffusers.StableDiffusionLatentUpscalePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: EulerDiscreteScheduler + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
+ + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +EulerDiscreteScheduler. + + + +Pipeline to upscale the resolution of Stable Diffusion output images by a factor of 2. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[PIL.Image.Image]] +num_inference_steps: int = 75 +guidance_scale: float = 9.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image upscaling. + + +image (PIL.Image.Image or ListPIL.Image.Image or torch.FloatTensor) — +Image, or tensor representing an image batch which will be upscaled. If it’s a tensor, it can be +either a latent output from a stable diffusion model, or an image tensor in the range [-1, 1]. It +will be considered a latent if image.shape[1] is 4; otherwise, it will be considered to be an +image representation and encoded using this pipeline’s vae encoder. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. 
The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. 
+ + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. diff --git a/scrapped_outputs/3c5c2fc402fcafda0b3fa2a9c17796cf.txt b/scrapped_outputs/3c5c2fc402fcafda0b3fa2a9c17796cf.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/3c5c2fc402fcafda0b3fa2a9c17796cf.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/3cbe0f98fb411c3bedb6f6482a9e6e70.txt b/scrapped_outputs/3cbe0f98fb411c3bedb6f6482a9e6e70.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/3cbe0f98fb411c3bedb6f6482a9e6e70.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from the source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) in some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/3cbf869ba197ef4da5ffcb93e0c2b93c.txt b/scrapped_outputs/3cbf869ba197ef4da5ffcb93e0c2b93c.txt new file mode 100644 index 0000000000000000000000000000000000000000..48db1d86c42bfa6034942e102e0f77668b1cd8f2 --- /dev/null +++ b/scrapped_outputs/3cbf869ba197ef4da5ffcb93e0c2b93c.txt @@ -0,0 +1,290 @@ +Cycle Diffusion + + +Overview + +Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. +The abstract of the paper is the following: +Diffusion models have achieved unprecedented performance in generative modeling. 
The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. +Tips: +The Cycle Diffusion pipeline is fully compatible with any Stable Diffusion checkpoint. +Currently Cycle Diffusion only works with the DDIMScheduler. +Example: +In the following we show how to best use the CycleDiffusionPipeline + + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import CycleDiffusionPipeline, DDIMScheduler + +# load the pipeline +# make sure you're logged in with `huggingface-cli login` +model_id_or_path = "CompVis/stable-diffusion-v1-4" +scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler") +pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda") + +# let's download an initial image +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("horse.png") + +# let's specify a prompt +source_prompt = "An astronaut riding a horse" +prompt = "An astronaut riding an elephant" + +# call the pipeline +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.8, + guidance_scale=2, + source_guidance_scale=1, +).images[0] + +image.save("horse_to_elephant.png") + +# let's try another example +# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("black.png") + +source_prompt = "A black colored car" +prompt = "A blue colored car" + +# call the pipeline +torch.manual_seed(0) +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.85, + guidance_scale=3, + source_guidance_scale=1, +).images[0] + +image.save("black_to_blue.png") + +CycleDiffusionPipeline + + +class diffusers.CycleDiffusionPipeline + +< +source +> +( +vae: 
AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: DDIMScheduler +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +source_prompt: typing.Union[str, typing.List[str]] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +source_guidance_scale: typing.Optional[float] = 1 +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +source_guidance_scale (float, optional, defaults to 1) — +Guidance scale for the source prompt. This is useful to control the amount of influence the source +prompt has on the encoding. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.1) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
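For reference, here is a small, non-authoritative sketch of how the two offloading helpers documented above can be combined with attention slicing on this pipeline. The checkpoint is the one used in the example earlier on this page; enable_model_cpu_offload requires the accelerate package, and enable_attention_slicing is assumed to be available here just as on the other Stable Diffusion pipelines. Copied
import torch
from diffusers import CycleDiffusionPipeline, DDIMScheduler

model_id_or_path = "CompVis/stable-diffusion-v1-4"
scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler, torch_dtype=torch.float16)

# Move one whole sub-model at a time to the GPU only while it runs (requires `accelerate`).
pipe.enable_model_cpu_offload()

# Alternatively, offload per submodule for maximum memory savings at the cost of speed:
# pipe.enable_sequential_cpu_offload()

# Sliced attention trades a small slowdown for additional memory savings
# (assumed to be available on this pipeline as on the other Stable Diffusion pipelines).
pipe.enable_attention_slicing()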
diff --git a/scrapped_outputs/3d510db066b08215f2f648a3bf4a9d9d.txt b/scrapped_outputs/3d510db066b08215f2f648a3bf4a9d9d.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/3d510db066b08215f2f648a3bf4a9d9d.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/3d55478a78858e691538d7851eb0fb00.txt b/scrapped_outputs/3d55478a78858e691538d7851eb0fb00.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/3d55478a78858e691538d7851eb0fb00.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. 
Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/3d570047b7c081509a8254ea4fdceb45.txt b/scrapped_outputs/3d570047b7c081509a8254ea4fdceb45.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/3d8590608169687f5f1f2abdbffb2502.txt b/scrapped_outputs/3d8590608169687f5f1f2abdbffb2502.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/3d8590608169687f5f1f2abdbffb2502.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. 
In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). 
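As a minimal usage sketch (not part of the reference above), a VQModel can be loaded and used to decode latents on its own. The checkpoint name and the 4x downsampling factor below are illustrative assumptions; from_pretrained() comes from ModelMixin, and decode() is assumed to return an output whose sample attribute holds the reconstructed images. Copied
import torch
from diffusers import VQModel

# Assumed checkpoint layout: a latent-diffusion repo that stores its VQ-VAE under a "vqvae" subfolder.
vqvae = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae")

# Decode (here random) latents back into images; the 64x64 latent size assumes the
# usual 4x spatial downsampling of a 256x256 image for this kind of checkpoint.
latents = torch.randn(1, vqvae.config.latent_channels, 64, 64)
with torch.no_grad():
    images = vqvae.decode(latents).sample  # assumed shape: (1, 3, 256, 256)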
forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/3db1036f5dd9fc8611ec7862d7551e0f.txt b/scrapped_outputs/3db1036f5dd9fc8611ec7862d7551e0f.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/3db1036f5dd9fc8611ec7862d7551e0f.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. 
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. 
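As a brief sketch of typical usage (the Stable Diffusion checkpoint name mirrors the examples elsewhere in these docs), the scheduler can be constructed directly with the defaults documented above, or swapped into an existing pipeline by reusing that pipeline's scheduler config. Copied
from diffusers import DiffusionPipeline, PNDMScheduler

# Standalone: build the scheduler and prepare 50 inference timesteps.
scheduler = PNDMScheduler(beta_schedule="scaled_linear", skip_prk_steps=True)
scheduler.set_timesteps(50)

# Or reuse an existing pipeline's scheduler config so the beta schedule matches the checkpoint.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)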
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/3dec56cb3c337cc30a9b263b09995a20.txt b/scrapped_outputs/3dec56cb3c337cc30a9b263b09995a20.txt new file mode 100644 index 0000000000000000000000000000000000000000..49e19fb4c11ed7fa69c26f38e304a1a47862bdca --- /dev/null +++ b/scrapped_outputs/3dec56cb3c337cc30a9b263b09995a20.txt @@ -0,0 +1,466 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. 
Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image A OpenPose bone image. 
TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image A mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1Trained with semantic segmentation An custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. The Adapter uses this input condition to generate guidance for the UNet. If the +type is specified as torch.FloatTensor, it is passed to the Adapter as is. PIL.Image.Image can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... 
) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
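For illustration only, the following hedged sketch precomputes embeddings with encode_prompt() and feeds them back to the pipeline. It assumes the pipe and color_palette objects from the Color Adapter example above, and that the method returns a (prompt_embeds, negative_prompt_embeds) tuple as in the other Stable Diffusion pipelines. Copied
# Hedged sketch: the tuple return value below is an assumption, not documented above.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "At night, glowing cubes in front of the beach",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# The precomputed embeddings replace the raw text prompt in the pipeline call.
out_image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=color_palette,
).images[0]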
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. 
As a result, the returned sample will still retain a substantial amount of noise as determined by the discrete timesteps selected by the scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image Output. guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt input argument. ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput instead of a plain tuple. callback (Callable, optional) — A function that will be called every callback_steps steps during inference.
The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are Flawed. guidance_rescale is defined as φ in equation 16 of Common Diffusion Noise Schedules and Sample Steps are Flawed. Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — If original_size is not the same as target_size the image will appear to be down- or upsampled. original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — For most cases, target_size should be set to the desired height and width of the generated image. If not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the residual in the original unet. If multiple adapters are specified in init, you can set the corresponding scale as a list.
adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
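The switches above compose: VAE slicing, VAE tiling, and attention slicing all trade a little speed for lower peak memory, while FreeU is an independent quality tweak. A minimal sketch, reusing the pipe object from the adapter example above; the FreeU values are the SDXL settings commonly suggested in the official FreeU repository and are only illustrative:

# Reuse `pipe` from the StableDiffusionXLAdapterPipeline example above.
pipe.enable_vae_slicing()  # decode the batch one image at a time
pipe.enable_vae_tiling()   # decode each image in tiles; helps at large resolutions

# Only enable attention slicing if you are NOT already using PyTorch 2.0 SDPA or
# xFormers -- combining them can slow inference down considerably.
# pipe.enable_attention_slicing()

# FreeU is unrelated to memory; these values are the ones suggested for SDXL in the
# FreeU repository and should be treated as a starting point, not a guarantee.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)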
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
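encode_prompt() is useful when you want to compute the text embeddings once and reuse them across several calls, for example when sweeping seeds or adapter scales with the same prompt. A minimal sketch, reusing pipe and sketch_image from the adapter example above; in current diffusers releases the method returns the four tensors in the order shown, matching the corresponding __call__ arguments:

(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a dog in real world, high quality",
    negative_prompt="worst quality, low quality",
    device=pipe.device,
    do_classifier_free_guidance=True,
)

# Pass the precomputed embeddings instead of `prompt` / `negative_prompt`.
image = pipe(
    image=sketch_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    guidance_scale=7.5,
).images[0]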
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters w (torch.Tensor) — guidance scale values for which to generate the embedding vectors embedding_dim (int, optional, defaults to 512) — dimension of the embeddings to generate dtype — data type of the generated embeddings Returns torch.FloatTensor Embedding vectors with shape (len(w), embedding_dim) See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/3dede73e9d6ad17301b6b7b48a055141.txt b/scrapped_outputs/3dede73e9d6ad17301b6b7b48a055141.txt new file mode 100644 index 0000000000000000000000000000000000000000..b7d158c83f1d4ace037f662eae21300f2008f1a9 --- /dev/null +++ b/scrapped_outputs/3dede73e9d6ad17301b6b7b48a055141.txt @@ -0,0 +1,568 @@ Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR, which recommends for ODE/SDE solvers: setting use_karras_sigmas=True or lu_lambdas=True to improve image quality, and setting euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE); a configuration sketch is shown below. Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t work well for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints!
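The scheduler recommendation from the tips can be applied to an already-loaded pipeline via from_config, and prompt_2 lets you address the second text encoder separately. A minimal sketch, assuming the stabilityai/stable-diffusion-xl-base-1.0 checkpoint and a CUDA device; which scheduler flags help most can vary with the checkpoint and the number of steps:

import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Swap in DPM++ 2M with Karras sigmas and a final Euler step, per the tip above,
# so that a low step count stays numerically stable.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True, euler_at_final=True
)

# `prompt` goes to the first (CLIP ViT-L) text encoder, `prompt_2` to the second
# (OpenCLIP bigG) one; they can describe different aspects of the same image.
image = pipe(
    prompt="a photo of an astronaut riding a horse on mars",
    prompt_2="cinematic lighting, highly detailed, 35mm film",
    num_inference_steps=25,
    guidance_scale=5.0,
).images[0]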
StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
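Both force_zeros_for_empty_prompt and add_watermarker are regular __init__ arguments, so they can be overridden at load time: from_pretrained() forwards keyword arguments it recognizes from the pipeline’s signature. A minimal sketch under that assumption; disabling the watermarker simply skips the optional invisible_watermark post-processing step:

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    add_watermarker=False,              # skip invisible watermarking of outputs
    force_zeros_for_empty_prompt=True,  # the default; shown only for clarity
)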
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionXLPipelineOutput or tuple + +StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. 
tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. 
Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. 
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to False) — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to True) — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-guided image inpainting using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use.
If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0).
Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
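The helper methods documented above can be combined on a single pipeline instance. The snippet below is a minimal sketch rather than a canonical recipe: the checkpoint matches the inpainting example earlier on this page, and the FreeU values are ones commonly suggested for SDXL, so treat them as illustrative starting points. Copied
import torch
from diffusers import StableDiffusionXLInpaintPipeline

# Minimal sketch: combine the memory/speed helpers documented above.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

pipe.enable_vae_slicing()    # decode the batch in slices to save memory
pipe.enable_vae_tiling()     # decode/encode in tiles for large images
pipe.fuse_qkv_projections()  # experimental: fuse QKV projections in attention
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)  # illustrative SDXL values

# ... run the pipeline as in the inpainting example above, then revert if needed
pipe.disable_freeu()
pipe.unfuse_qkv_projections()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()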
diff --git a/scrapped_outputs/3e0d059659a72218a9b0303ebd7717e9.txt b/scrapped_outputs/3e0d059659a72218a9b0303ebd7717e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/3e0d059659a72218a9b0303ebd7717e9.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To dislay in google colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
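Since audio_length_in_s defaults to sample_size / sample_rate (see the parameter description above), you can read the default clip length from the UNet config and request longer clips explicitly. A minimal sketch, reusing the harmonai/maestro-150k checkpoint from the example above; the batch size, clip length, and seed are illustrative. Copied
import torch
from diffusers import DanceDiffusionPipeline

pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")
pipe = pipe.to("cuda")

# Default clip length in seconds, as described in the audio_length_in_s docs above.
default_length_s = pipe.unet.config.sample_size / pipe.unet.config.sample_rate
print(f"default clip length: {default_length_s:.2f}s")

# Generate two longer clips with a fixed seed for reproducibility.
generator = torch.Generator(device="cuda").manual_seed(0)
audios = pipe(batch_size=2, audio_length_in_s=8.0, generator=generator).audios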
diff --git a/scrapped_outputs/3e2cd21e54144abab37653602476da2e.txt b/scrapped_outputs/3e2cd21e54144abab37653602476da2e.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/3e2cd21e54144abab37653602476da2e.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. --pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. 
Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. 
Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl_wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/3e351a68f5491c43ca171fc3ce0ba7ae.txt b/scrapped_outputs/3e351a68f5491c43ca171fc3ce0ba7ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ed526b258968db676928aa0d8cb1ec1badf1fc8 --- /dev/null +++ b/scrapped_outputs/3e351a68f5491c43ca171fc3ce0ba7ae.txt @@ -0,0 +1,129 @@ +ControlNet-XS ControlNet-XS was introduced in ControlNet-XS by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the original ControlNet can be made much smaller and still produce good results. Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster (see benchmark with StableDiffusion-XL) and uses ~45% less memory. Here’s the overview from the project page: With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS.
We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license. This model was contributed by UmerHA. ❤️ Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetXSPipeline class diffusers.StableDiffusionControlNetXSPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetXSModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetXSModel) — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet-XS guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 control_guidance_start: float = 0.0 control_guidance_end: float = 1.0 clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetXSPipeline, ControlNetXSModel +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 +>>> controlnet = ControlNetXSModel.from_pretrained( +... "UmerHA/ConrolNetXS-SD2.1-canny", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetXSPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", controlnet=controlnet, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
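The controlnet_conditioning_scale, control_guidance_start, and control_guidance_end arguments documented above control how strongly, and during which portion of the denoising steps, the ControlNet-XS guidance is applied. A minimal sketch that reuses the pipe, prompt, negative_prompt, and canny_image objects from the example above; the specific values are illustrative. Copied
# Reuses `pipe`, `prompt`, `negative_prompt`, and `canny_image` from the example above.
# Apply the canny conditioning only during the first 80% of the denoising steps,
# with a slightly stronger conditioning scale.
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=canny_image,
    controlnet_conditioning_scale=0.8,
    control_guidance_start=0.0,
    control_guidance_end=0.8,
    num_inference_steps=50,
).images[0]
image.save("controlnet_xs_canny.png")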
diff --git a/scrapped_outputs/3e36b8e1a98d89dfd1fc0a252526d444.txt b/scrapped_outputs/3e36b8e1a98d89dfd1fc0a252526d444.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/3e36b8e1a98d89dfd1fc0a252526d444.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. 
The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! 
Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. diff --git a/scrapped_outputs/3e6c0e0cf272d14fcc1b8bff04eba30a.txt b/scrapped_outputs/3e6c0e0cf272d14fcc1b8bff04eba30a.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/3e6c0e0cf272d14fcc1b8bff04eba30a.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. 
Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/3e853432348e6eb9a4abc5ad92824658.txt b/scrapped_outputs/3e853432348e6eb9a4abc5ad92824658.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/3e853432348e6eb9a4abc5ad92824658.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. 
This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. 
If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/3e9ac7ca9dc119848dd1ae57576cf466.txt b/scrapped_outputs/3e9ac7ca9dc119848dd1ae57576cf466.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/3e9ac7ca9dc119848dd1ae57576cf466.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. 
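To make that setup step concrete, here is a minimal sketch of just the initialization described above; the model id and dtype mirror the complete example that follows, which also shows how the prompts are split across processes: Copied import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline

# PartialState detects the distributed environment (rank, world size, device) automatically
distributed_state = PartialState()

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
# each process moves the pipeline to the GPU it was assigned
pipeline.to(distributed_state.device)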
Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/3e9fabf2926f5ab258e0a1152dbdb1b6.txt b/scrapped_outputs/3e9fabf2926f5ab258e0a1152dbdb1b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..c806105e4a67603d6c526c6d361c1f2bbc2347dc --- /dev/null +++ b/scrapped_outputs/3e9fabf2926f5ab258e0a1152dbdb1b6.txt @@ -0,0 +1,553 @@ +InstructPix2Pix: Learning to Follow Image Editing Instructions + + +Overview + +InstructPix2Pix: Learning to Follow Image Editing Instructions by Tim Brooks, Aleksander Holynski and Alexei A. Efros. +The abstract of the paper is the following: +We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. 
To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionInstructPix2PixPipeline +Text-Based Image Editing +🤗 Space + +Usage example + + + + Copied +import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + + +def download_image(url): + image = PIL.Image.open(requests.get(url, stream=True).raw) + image = PIL.ImageOps.exif_transpose(image) + image = image.convert("RGB") + return image + + +image = download_image(url) + +prompt = "make the mountains snowy" +images = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images +images[0].save("snowy_mountains.png") + +StableDiffusionInstructPix2PixPipeline + + +class diffusers.StableDiffusionInstructPix2PixPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+In addition, the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +num_inference_steps: int = 100 +guidance_scale: float = 7.5 +image_guidance_scale: float = 1.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +image (PIL.Image.Image) — +Image, or tensor representing an image batch, which will be repainted according to prompt. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. This pipeline requires a value of at least 1. + + +image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is used to push the generated image towards the initial image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. A higher image guidance scale encourages the model to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator.
+ + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. 
+ + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_lora_weights + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +unet_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the +serialization process easier and cleaner. 
+ + +text_encoder_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from +transformers, we cannot modify it directly; that is why we have to explicitly pass the text encoder LoRA state +dict. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training (for example, on +TPUs) when you need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training (for example, on TPUs) when you +need to replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save the LoRA parameters corresponding to the UNet and the text encoder. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to +torch.device('meta'), and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/3ea46f76fcd28cd37d6fdd87e56dcba5.txt b/scrapped_outputs/3ea46f76fcd28cd37d6fdd87e56dcba5.txt new file mode 100644 index 0000000000000000000000000000000000000000..aff30a571d591ad29e58f7a9ac94d335f8988bee --- /dev/null +++ b/scrapped_outputs/3ea46f76fcd28cd37d6fdd87e56dcba5.txt @@ -0,0 +1,136 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you're training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it and show how you can adapt it for your own use case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install .
Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/textual_inversion +pip install -r requirements.txt + + + + Copied cd examples/textual_inversion +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you’ll find the dataset preprocessing code and training loop in the main() function. 
The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. 
Add the following parameters to the training command: Copied --validation_prompt="A train" +--num_validation_images=4 +--validation_steps=100 + + + + Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub + + + + Copied export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export DATA_DIR="./cat" + +python textual_inversion_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --output_dir="textual_inversion_cat" \ + --push_to_hub + + +After training is complete, you can use your newly trained model for inference: + + + + Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A train", num_inference_steps=50).images[0] +image.save("cat-train.png") + + +Flax doesn't support the load_textual_inversion() method, but the textual_inversion_flax.py script saves the learned embeddings as a part of the model after training. This means you can use the model for inference like any other Flax model: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +model_path = "path-to-your-trained-model" +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) + +prompt = "A train" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) + +# save the first generated image +images[0].save("cat-train.png") + + + Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL.
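As a minimal sketch of that first suggestion, here is how you might load the locally trained embeddings instead of a Hub concept, and optionally register a second embedding to reference through negative_prompt. The directory name assumes the --output_dir from the launch command above; the negative-embedding path and token are purely hypothetical placeholders: Copied from diffusers import StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load the embeddings saved by the training run above (the directory passed to --output_dir)
pipeline.load_textual_inversion("./textual_inversion_cat")

# an embedding can also act as a negative embedding: load it under a token of your choice
# (hypothetical path and token) and reference that token in negative_prompt
# pipeline.load_textual_inversion("./my_negative_embedding", token="<my-negative>")
# image = pipeline("A train", negative_prompt="<my-negative>").images[0]

# include your placeholder token in the prompt so the learned concept is applied
image = pipeline("A train", num_inference_steps=50).images[0]
image.save("cat-train.png")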
diff --git a/scrapped_outputs/3eaa2a1805b2fa71ecbfb85c8344300d.txt b/scrapped_outputs/3eaa2a1805b2fa71ecbfb85c8344300d.txt new file mode 100644 index 0000000000000000000000000000000000000000..9aa85d8f0ea063fa9ca9cc4c67b274c3a1854a84 --- /dev/null +++ b/scrapped_outputs/3eaa2a1805b2fa71ecbfb85c8344300d.txt @@ -0,0 +1,624 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. 
The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. 
Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +...
) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
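As a usage note, encode_prompt() makes it possible to precompute the SDXL text embeddings once and reuse them across several calls. The following is a minimal, illustrative sketch (not part of the official example) that assumes the pipe and canny_image objects created in the example above; the return order shown matches the four embedding tensors documented for this method:
>>> # precompute both positive and negative embeddings from the two SDXL text encoders
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="aerial view, a futuristic research complex in a bright foggy jungle",
...     negative_prompt="low quality, bad quality, sketches",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )

>>> # reuse the precomputed embeddings instead of passing `prompt` / `negative_prompt`
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     pooled_prompt_embeds=pooled_prompt_embeds,
...     negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...     image=canny_image,
...     controlnet_conditioning_scale=0.5,
... ).images[0]
Passing the precomputed embeddings skips re-running both text encoders on every call, which can be convenient when sweeping over seeds or ControlNet scales with a fixed prompt.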
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. 
Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. strength (float, optional, defaults to 0.8) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-Adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try its best to recognize the content of the input image even if +you remove all prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952.
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +...
with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: Optional = None image_encoder: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as background.
strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters.
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-Adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952.
negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2  # needed for the Canny conditioning below +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +...
).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
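To illustrate how this output class is consumed in practice, here is a small sketch (assuming the pipe and canny_image objects from the StableDiffusionXLControlNetPipeline text-to-image example earlier on this page): with return_dict=True (the default) the call returns an output object with an images attribute, while return_dict=False returns a plain tuple whose first element is the list of generated images.
>>> # default: an output object with an `images` attribute
>>> output = pipe(prompt="aerial view of a futuristic research complex", image=canny_image)
>>> image = output.images[0]  # list of PIL.Image.Image (or an np.ndarray, depending on output_type)

>>> # with return_dict=False, a plain tuple is returned instead
>>> images = pipe(prompt="aerial view of a futuristic research complex", image=canny_image, return_dict=False)[0]
>>> image = images[0]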
diff --git a/scrapped_outputs/3eb1cc8d7b9780b5df5b5d0190a840a3.txt b/scrapped_outputs/3eb1cc8d7b9780b5df5b5d0190a840a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1045fc50045367e6cb6ef462fcda1222c68ba0d --- /dev/null +++ b/scrapped_outputs/3eb1cc8d7b9780b5df5b5d0190a840a3.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. 
For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Learn more about other ways PyTorch 2.0 can help optimize your model in the Accelerate inference of text-to-image diffusion models tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = 
"https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/3eb213ad6c853246f300407c41b24442.txt b/scrapped_outputs/3eb213ad6c853246f300407c41b24442.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/3eb213ad6c853246f300407c41b24442.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. 
prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/3ec79b44132270fe5edbe30295e1a6fa.txt b/scrapped_outputs/3ec79b44132270fe5edbe30295e1a6fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..79beedec7941f21d28c6e409790c9155bcf39eff --- /dev/null +++ b/scrapped_outputs/3ec79b44132270fe5edbe30295e1a6fa.txt @@ -0,0 +1,74 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/text_to_image +pip install -r requirements.txt + + + + Copied cd examples/text_to_image +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA-relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA-relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM.
Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/3ed4785cacfc2eb7f72041d43acd5436.txt b/scrapped_outputs/3ed4785cacfc2eb7f72041d43acd5436.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cb15709ef2db459331589418952eb68057fc110 --- /dev/null +++ b/scrapped_outputs/3ed4785cacfc2eb7f72041d43acd5436.txt @@ -0,0 +1,26 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... diff --git a/scrapped_outputs/3f09fcfb2f06f34f0fa3fe7bdfe82817.txt b/scrapped_outputs/3f09fcfb2f06f34f0fa3fe7bdfe82817.txt new file mode 100644 index 0000000000000000000000000000000000000000..643707bcdd440e65416f02ac6003e845768e0c87 --- /dev/null +++ b/scrapped_outputs/3f09fcfb2f06f34f0fa3fe7bdfe82817.txt @@ -0,0 +1,96 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
Also, to know more about reducing the memory usage of this pipeline, refer to the [“Reduce memory usage”] section here. Sample output with I2VGenXL: masterpiece, bestquality, sunset. + Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) — +A I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) → pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. target_fps (int, optional) — +Frames per second. The rate at which the generated images shall be exported to a video after generation. This is also used as a “micro-condition” while generation. num_frames (int, optional) — +The number of video frames to generate. num_inference_steps (int, optional) — +The number of denoising steps. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) — +The number of images to generate per prompt. decode_chunk_size (int, optional) — +The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency +between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once +for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple + +If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch +>>> from diffusers import I2VGenXLPipeline + +>>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +>>> pipeline.enable_model_cpu_offload() + +>>> image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?raw=true" +>>> image = load_image(image_url).convert("RGB") + +>>> prompt = "Papers were floating in the air on a table in the library" +>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +>>> generator = torch.manual_seed(8888) + +>>> frames = pipeline( +... prompt=prompt, +... image=image, +... num_inference_steps=50, +... negative_prompt=negative_prompt, +... guidance_scale=9.0, +... 
generator=generator +... ).frames[0] +>>> video_path = export_to_gif(frames, "i2v.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_videos_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Encodes the prompt into text encoder hidden states. I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for image-to-video pipeline. diff --git a/scrapped_outputs/3f11fd85a08d0e1fb47a9bd9b8b2d67e.txt b/scrapped_outputs/3f11fd85a08d0e1fb47a9bd9b8b2d67e.txt new file mode 100644 index 0000000000000000000000000000000000000000..26903c98059769d09923319afe7503b246c3bfc7 --- /dev/null +++ b/scrapped_outputs/3f11fd85a08d0e1fb47a9bd9b8b2d67e.txt @@ -0,0 +1,92 @@ +Score SDE VE + + +Overview + +Score-Based Generative Modeling through Stochastic Differential Equations (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. +The abstract of the paper is the following: +Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. +The original codebase can be found here. +This pipeline implements the Variance Expanding (VE) variant of the method. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_score_sde_ve.py +Unconditional Image Generation +- + +ScoreSdeVePipeline + + +class diffusers.ScoreSdeVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: DiffusionPipeline + +) + + +Parameters + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the — + + +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) — +unet (UNet2DModel): U-Net architecture to denoise the encoded image. 
scheduler (SchedulerMixin): +The ScoreSdeVeScheduler scheduler to be used in combination with unet to denoise the encoded image. + + + + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 2000 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/3f493f1efdab38e0d9de7ce7d8441258.txt b/scrapped_outputs/3f493f1efdab38e0d9de7ce7d8441258.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/3f493f1efdab38e0d9de7ce7d8441258.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin, Anastasia Maltseva, Igor Pavlov, Andrei Filatov, Arseniy Shakhmatov, Andrey Kuznetsov, Denis Dimitrov, and Zein Shaheen. The description from its GitHub page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, an encoder-decoder model based on the T5 architecture; a new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters; and Sber-MoVQGAN, a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
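The three components above map directly onto the pipeline's modules. As a minimal sketch (the checkpoint id is the same one used in the class examples below; the exact class names printed may vary with your diffusers version), you can load the pipeline and inspect them:

import torch
from diffusers import Kandinsky3Pipeline

pipe = Kandinsky3Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)

# text_encoder is the FLAN-UL2 (T5-based) encoder, unet is the Kandinsky3UNet,
# and movq is the Sber-MoVQGAN decoder that turns latents back into images.
print(type(pipe.text_encoder).__name__)  # e.g. T5EncoderModel
print(type(pipe.unet).__name__)          # e.g. Kandinsky3UNet
print(type(pipe.movq).__name__)          # e.g. VQModel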
Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. 
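Because attention_mask and negative_attention_mask must accompany pre-computed embeddings, one workflow is to call encode_prompt once and reuse its outputs across several generations. The sketch below assumes (this is not spelled out in the signature above) that encode_prompt returns the embeddings followed by their attention masks; the prompt is only illustrative:

import torch
from diffusers import Kandinsky3Pipeline

pipe = Kandinsky3Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Assumed return order: prompt/negative embeddings first, then the matching attention masks.
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = pipe.encode_prompt(
    "An oil painting of a lighthouse at dawn",
    do_classifier_free_guidance=True,
    num_images_per_prompt=1,
)

# When prompt_embeds is passed directly, the attention masks must be provided too
# (see the parameter descriptions above).
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    attention_mask=attention_mask,
    negative_attention_mask=negative_attention_mask,
    num_inference_steps=25,
    generator=generator,
).images[0]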
Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." +>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. 
diff --git a/scrapped_outputs/3f55bb797f73ef0728d7c4588c150992.txt b/scrapped_outputs/3f55bb797f73ef0728d7c4588c150992.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a297f5bb6f30edb573cac1e77ea6242239abd31 --- /dev/null +++ b/scrapped_outputs/3f55bb797f73ef0728d7c4588c150992.txt @@ -0,0 +1,170 @@ +Create reproducible pipelines + +Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. +This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. +💡 We strongly recommend reading PyTorch’s statement about reproducibility: +Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. + +Control randomness + +During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. +Take a look at the tensor values in the DDIMPipeline after two inference steps: + + + Copied +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) +Running the code above prints one value, but if you run it again you get a different value. What is going on here? +Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. +But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. + +CPU + +To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. +If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 
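Keep in mind that a Generator advances its internal state every time it is consumed, so to repeat a result you either create a fresh Generator or call manual_seed again before the next run. A minimal sketch of that pattern, reusing the same checkpoint as above:
Copied
import torch
from diffusers import DDIMPipeline
import numpy as np

ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
generator = torch.Generator(device="cpu")

# re-seeding restores the same random state, so both runs should yield the same values
generator.manual_seed(0)
first = ddim(num_inference_steps=2, output_type="np", generator=generator).images

generator.manual_seed(0)
second = ddim(num_inference_steps=2, output_type="np", generator=generator).images

print(np.abs(first - second).sum())  # expected to print 0.0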
+💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch as Generators are random states that can be +passed to multiple pipelines in a sequence. + +GPU + +Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. +To circumvent this problem, 🧨 Diffusers has a randn_tensor function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. +You’ll see the results are much closer now! + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often negligible, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. +Finally, more complex pipelines such as UnCLIPPipeline are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. In this case, you’ll need to run on +exactly the same hardware and PyTorch version for full reproducibility. + +randn_tensor + +diffusers.utils.randn_tensor < source > ( shape: typing.Union[typing.Tuple, typing.List] generator: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None device: typing.Optional[ForwardRef('torch.device')] = None dtype: typing.Optional[ForwardRef('torch.dtype')] = None layout: typing.Optional[ForwardRef('torch.layout')] = None ) This is a helper function that allows you to create random tensors on the desired device with the desired dtype. When +passing a list of generators, you can seed each batch element individually. If CPU generators are passed, the tensor +will always be created on the CPU. + +Deterministic algorithms + +You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline.
However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! +Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. +PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. + + + Copied +import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) +Now when you run the same pipeline twice, you’ll get identical results. + + + Copied +import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist = ", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/3f6d5f9f38a6fcb516b20e62a0abfa97.txt b/scrapped_outputs/3f6d5f9f38a6fcb516b20e62a0abfa97.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/3f6d5f9f38a6fcb516b20e62a0abfa97.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers.
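As a quick illustration of that unified interface (a minimal sketch; the repository id and subfolder names below are common examples, not prescribed by this overview), the same from_pretrained() call works for a whole pipeline or for an individual component:
Copied
from diffusers import DiffusionPipeline, UNet2DConditionModel, DDIMScheduler

# load a complete pipeline from the Hub (or from a local directory path)
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# load individual components from subfolders of the same repository
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")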
diff --git a/scrapped_outputs/3f6df4d03b9ee25875c4b1764850cfb9.txt b/scrapped_outputs/3f6df4d03b9ee25875c4b1764850cfb9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/3f9084db41fe58a2b672fb7ff25ae499.txt b/scrapped_outputs/3f9084db41fe58a2b672fb7ff25ae499.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/3f9084db41fe58a2b672fb7ff25ae499.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. 
For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/3fa2b062ecfecd8e60e520b2cae98c59.txt b/scrapped_outputs/3fa2b062ecfecd8e60e520b2cae98c59.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/3fa2b062ecfecd8e60e520b2cae98c59.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. 
Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. 
Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! 
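Negative prompts, mentioned above as part of researching a checkpoint, are another inexpensive lever for quality. A rough sketch of combining one with the batching helper from earlier (the negative prompt wording here is purely illustrative):
Copied
negative_prompt = "blurry, low quality, deformed, watermark"

# get_inputs() is the batching helper defined earlier; since the prompt is a list of 8,
# the negative prompt must be a list of the same length
images = pipeline(**get_inputs(batch_size=8), negative_prompt=[negative_prompt] * 8).images
make_image_grid(images, rows=2, cols=4)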
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/3fb184550dd0961f0915937c458cdb58.txt b/scrapped_outputs/3fb184550dd0961f0915937c458cdb58.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/3fb2295c4d5a344920f87ac8461dcf65.txt b/scrapped_outputs/3fb2295c4d5a344920f87ac8461dcf65.txt new file mode 100644 index 0000000000000000000000000000000000000000..cbdfab551c65a04d22ed1db010bb50b8fb750880 --- /dev/null +++ b/scrapped_outputs/3fb2295c4d5a344920f87ac8461dcf65.txt @@ -0,0 +1,852 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. 
We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. 
callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... 
"runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. 
+ cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
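As a concrete illustration (a sketch rather than an official example; it assumes encode_prompt returns a (prompt_embeds, negative_prompt_embeds) pair, as in recent diffusers releases), the embeddings can be computed once and then passed back to the pipeline call together with the ControlNet conditioning image:
Copied
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# prepare a canny conditioning image, as in the ControlNet example earlier on this page
image = np.array(
    load_image("https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png")
)
canny = cv2.Canny(image, 100, 200)[:, :, None]
canny_image = Image.fromarray(np.concatenate([canny, canny, canny], axis=2))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# precompute the text embeddings (assumed return order, see note above)
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "futuristic-looking woman",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# feed the cached embeddings back instead of the raw prompt
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=canny_image,
    num_inference_steps=20,
).images[0]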
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. 
If it’s a NumPy array or PyTorch tensor, it should contain one color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W), and for a NumPy array it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]) — The ControlNet input condition to provide guidance to the unet for generation. If the type is specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be accepted as an image. The dimensions of the output image default to image’s dimensions. If height and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — The size of the margin in the crop applied to the image and mask. If None, no crop is applied to image and mask_image. If padding_mask_crop is not None, it first finds a rectangular region with the same aspect ratio as the image that contains all of the masked area, and then expands that region by padding_mask_crop. The image and mask_image are then cropped to the expanded region before being resized to the original image size for inpainting. This is useful when the masked area is small while the image is large and contains information irrelevant to inpainting, such as background. strength (float, optional, defaults to 1.0) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image. num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. 
This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. 
controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/3fb51a7a82fcb54e704937bbffb1d0de.txt b/scrapped_outputs/3fb51a7a82fcb54e704937bbffb1d0de.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa53bdbaa4afd2403451b87ff52427c2d2fa0b7 --- /dev/null +++ b/scrapped_outputs/3fb51a7a82fcb54e704937bbffb1d0de.txt @@ -0,0 +1,177 @@ +Configuration + +In Diffusers, schedulers of type schedulers.scheduling_utils.SchedulerMixin, and models of type ModelMixin inherit from ConfigMixin which conveniently takes care of storing all parameters that are +passed to the respective __init__ methods in a JSON-configuration file. + +ConfigMixin + + +class diffusers.ConfigMixin + +< +source +> +( +) + + + +Base class for all configuration classes. 
Stores all configuration parameters under self.config Also handles all +methods for loading/downloading/saving classes inheriting from ConfigMixin with +from_config() +save_config() +Class attributes: +config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). +ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). +has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). +_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). + +load_config + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using save_config(), e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + + +Instantiate a Python class from a config dictionary +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. 
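As a hedged aside (not part of the upstream reference), here is a minimal local round-trip sketch of the config machinery described above: save_config() is assumed to write a scheduler_config.json into the target directory, load_config() reads it back as a dict-like object, and from_config() (documented below) re-instantiates the class from it.

from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# save_config() serializes the __init__ arguments to a JSON file in this directory.
scheduler.save_config("./my_scheduler_config")

# load_config() returns the stored parameters as a dict-like object ...
config = DDPMScheduler.load_config("./my_scheduler_config")

# ... which from_config() accepts to rebuild an equivalent scheduler.
new_scheduler = DDPMScheduler.from_config(config)
assert new_scheduler.config.num_train_timesteps == 1000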
+ +from_config + +< +source +> +( +config: typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +config (Dict[str, Any]) — +A config dictionary from which the Python class will be instantiated. Make sure to only load +configuration files of compatible classes. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the Python class. +**kwargs will be directly passed to the underlying scheduler/model’s __init__ method and eventually +overwrite same named arguments of config. + + + +Instantiate a Python class from a config dictionary + +Examples: + + + Copied +>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) + +save_config + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a configuration object to the directory save_directory, so that it can be re-loaded using the +from_config() class method. diff --git a/scrapped_outputs/3fb983471dd9fea0347c6f4bd3ea3241.txt b/scrapped_outputs/3fb983471dd9fea0347c6f4bd3ea3241.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/3fb983471dd9fea0347c6f4bd3ea3241.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/4049ef82ba0e78a892445e72071f7b51.txt b/scrapped_outputs/4049ef82ba0e78a892445e72071f7b51.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5b8f792fc115c3ab0410e2647d2cb1a410a75ea --- /dev/null +++ b/scrapped_outputs/4049ef82ba0e78a892445e72071f7b51.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. 
It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. 
up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. Returns +UNet2DOutput or tuple + +If return_dict is True, an UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. 
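To make the forward() contract above concrete, here is a small, hedged sketch with a randomly initialized (untrained) model; the block types, channel counts, and shapes are illustrative choices, not values tied to any particular checkpoint.

import torch
from diffusers import UNet2DModel

# A tiny, randomly initialized 2D UNet; widths are kept small so it runs quickly on CPU.
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)

noisy_sample = torch.randn(1, 3, 32, 32)  # (batch, channel, height, width)
timestep = torch.tensor([10])             # the current denoising timestep

with torch.no_grad():
    output = model(noisy_sample, timestep).sample  # UNet2DOutput.sample, same spatial size as the input

print(output.shape)  # torch.Size([1, 3, 32, 32])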
diff --git a/scrapped_outputs/404cf94ab4e450650d7da7400c85f170.txt b/scrapped_outputs/404cf94ab4e450650d7da7400c85f170.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/404cf94ab4e450650d7da7400c85f170.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. 
If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
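To illustrate how set_timesteps(), scale_model_input() and step() fit together, here is a minimal denoising-loop sketch; the checkpoint name is only an example of an epsilon-predicting UNet whose training betas match the scheduler defaults: Copied
import torch
from diffusers import PNDMScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
scheduler = PNDMScheduler()  # default linear betas match the checkpoint above

scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
)

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    with torch.no_grad():
        noise_pred = model(model_input, t).sample
    # step() dispatches to step_prk() or step_plms() internally
    sample = scheduler.step(noise_pred, t, sample).prev_sample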
diff --git a/scrapped_outputs/4057a00c5a1cf2aa54c367298905ba65.txt b/scrapped_outputs/4057a00c5a1cf2aa54c367298905ba65.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/4057a00c5a1cf2aa54c367298905ba65.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. 
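Because a BaseOutput cannot be unpacked directly, to_tuple() is the way to get a plain tuple out of it; continuing the DDIMPipeline example from above: Copied
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()

# Attribute, key, and integer access all return the same data
images = outputs.images
images = outputs["images"]
images = outputs[0]

# Unpacking requires an explicit conversion first
(images,) = outputs.to_tuple()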
diff --git a/scrapped_outputs/405b106d793b05dd7857af284fe030cd.txt b/scrapped_outputs/405b106d793b05dd7857af284fe030cd.txt new file mode 100644 index 0000000000000000000000000000000000000000..4b6768075cf1e8a48d64b0ca2f557393a85af24d --- /dev/null +++ b/scrapped_outputs/405b106d793b05dd7857af284fe030cd.txt @@ -0,0 +1,82 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
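encode_prompt() can be used to precompute embeddings once and reuse them across calls. The sketch below assumes the two-tensor return value (prompt_embeds, negative_prompt_embeds) used by recent diffusers releases; if your version returns something different, adapt the unpacking accordingly: Copied
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Precompute the embeddings once...
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a photo of an astronaut riding a horse on mars",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# ...then reuse them instead of passing raw prompts
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    sag_scale=0.75,
).images[0]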
StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/4075799b189a16d472a288e617ae021f.txt b/scrapped_outputs/4075799b189a16d472a288e617ae021f.txt new file mode 100644 index 0000000000000000000000000000000000000000..d61c3f265da975aac5d562125c788f3e245e5b73 --- /dev/null +++ b/scrapped_outputs/4075799b189a16d472a288e617ae021f.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. 
Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. 
attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). 
Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
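Putting the pieces above together, a typical text-to-image run conditions a Stable Diffusion checkpoint on a Canny edge map. The checkpoints and input image below follow the common ControlNet examples and should be treated as illustrative: Copied
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Build a Canny edge map to use as the conditioning image
original = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(original), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a portrait of a woman, best quality", image=canny_image, num_inference_steps=20
).images[0]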
diff --git a/scrapped_outputs/409a8a1c51263e94f4f65a620cea9d1b.txt b/scrapped_outputs/409a8a1c51263e94f4f65a620cea9d1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/409a8a1c51263e94f4f65a620cea9d1b.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/40bcf46a594b2f82f5aa2b7a386eddf1.txt b/scrapped_outputs/40bcf46a594b2f82f5aa2b7a386eddf1.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/40bcf46a594b2f82f5aa2b7a386eddf1.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. 
As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory efficient attention 2.63s x3.61 Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speeds up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. diff --git a/scrapped_outputs/40cfa405fc5e07b5c730d25c5600d3a9.txt b/scrapped_outputs/40cfa405fc5e07b5c730d25c5600d3a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/40d33c07116420e3a0880b7857f7d71a.txt b/scrapped_outputs/40d33c07116420e3a0880b7857f7d71a.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/40d33c07116420e3a0880b7857f7d71a.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. 
The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... 
images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. 
First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... 
noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. 
diff --git a/scrapped_outputs/410cb4f9bfa90467d480a3fc6034eb7b.txt b/scrapped_outputs/410cb4f9bfa90467d480a3fc6034eb7b.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/410cb4f9bfa90467d480a3fc6034eb7b.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. --pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. 
Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. 
Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl_wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/4134fcf7f21b330082e161bd75e60fa1.txt b/scrapped_outputs/4134fcf7f21b330082e161bd75e60fa1.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/4134fcf7f21b330082e161bd75e60fa1.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE.
We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/41bf116c130047c8f22527a32e925e7d.txt b/scrapped_outputs/41bf116c130047c8f22527a32e925e7d.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/41bf116c130047c8f22527a32e925e7d.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. 
Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/41f9b36bd327038fa1a3111a3bd025b4.txt b/scrapped_outputs/41f9b36bd327038fa1a3111a3bd025b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/41f9b36bd327038fa1a3111a3bd025b4.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. 
The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir <path-to-train-directory> Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have run the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/4228118fc51611d7d65305e8fbe7aad2.txt b/scrapped_outputs/4228118fc51611d7d65305e8fbe7aad2.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/4228118fc51611d7d65305e8fbe7aad2.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson.
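As a rough usage sketch (the checkpoint and prompt below are placeholders and not part of the scheduler documentation), the scheduler can be swapped into an existing pipeline with from_config(): Copied
import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler while keeping its configuration
pipeline.scheduler = DPMSolverSDEScheduler.from_config(pipeline.scheduler.config)
image = pipeline("an astronaut riding a horse on mars", num_inference_steps=25).images[0]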
DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/4234c76e9ad0fee9392f8011f416fd04.txt b/scrapped_outputs/4234c76e9ad0fee9392f8011f416fd04.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/4234c76e9ad0fee9392f8011f416fd04.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. 
+num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. diff --git a/scrapped_outputs/4279b9bb925364289b997f3becc84f1b.txt b/scrapped_outputs/4279b9bb925364289b997f3becc84f1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/4279b9bb925364289b997f3becc84f1b.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. 
The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. 
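For reference, the v1-5 comparison shown above can be reproduced with a sketch like the following (reusing sample_prompts from earlier; the exact settings behind the displayed figures are not specified in this guide): Copied
import torch
from diffusers import StableDiffusionPipeline

sd_pipeline_v1_5 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Use the same seed as before so the comparison stays like-for-like
generator = torch.manual_seed(0)
images_v1_5 = sd_pipeline_v1_5(sample_prompts, num_images_per_prompt=1, generator=generator).images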
It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we provide this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline, we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=torch.float16).to("cuda") + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor.
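If you do generate multiple images per prompt, one way to take the per-prompt average mentioned above is sketched below (this assumes the pipeline returns the images grouped by prompt): Copied
import numpy as np

n = 4  # images generated per prompt
multi_images = sd_pipeline(prompts, num_images_per_prompt=n, output_type="np").images

# Score each group of n images against its prompt, then average across prompts
per_prompt_scores = [
    calculate_clip_score(multi_images[i * n : (i + 1) * n], [prompts[i]] * n)
    for i in range(len(prompts))
]
print(f"Mean CLIP score: {np.mean(per_prompt_scores):.4f}")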
Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
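Since KID is mentioned alongside FID, here is a sketch (not part of the original walkthrough) of computing it with torchmetrics on the same tensors; subset_size is lowered because only 10 images are used here: Copied
from torchmetrics.image.kid import KernelInceptionDistance

kid = KernelInceptionDistance(subset_size=5, normalize=True)
kid.update(real_images, real=True)
kid.update(fake_images, real=False)

kid_mean, kid_std = kid.compute()
print(f"KID: {float(kid_mean):.4f} +/- {float(kid_std):.4f}")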
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/4294e994c70c20909fd9ddff27ebe568.txt b/scrapped_outputs/4294e994c70c20909fd9ddff27ebe568.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0517cedafce5a2047cf1ffe75bd487ffe5fa88f --- /dev/null +++ b/scrapped_outputs/4294e994c70c20909fd9ddff27ebe568.txt @@ -0,0 +1,157 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5 Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9 channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image. 
Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: cat in a field. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") cat in a field. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). 
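PIA also exposes a motion_scale argument (documented in the pipeline reference below) that controls how much motion is added and what kind of motion it is; here is a short sketch reusing pipe, image, prompt, and generator from the example above, with the value 3 chosen from the documented looping-motion range: Copied
# motion_scale between 3-5 biases the result toward looping motion
output = pipe(image=image, prompt=prompt, motion_scale=3, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-looping-motion.gif")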
PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → PIAPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +PIAPipelineOutput or tuple + +If return_dict is True, PIAPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... ) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states.
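The motion_scale argument documented above selects between more motion (0-2), looping motion (3-5), and motion with image style transfer (6-8). The sketch below shows how it would be passed; it is only illustrative, the value 4 is an arbitrary pick from the documented looping range, and the output file name is a placeholder. Copied
import torch
from diffusers import MotionAdapter, PIAPipeline
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
).resize((512, 512))

# motion_scale=4 falls in the 3-5 range documented above, which is described as looping motion.
output = pipe(
    image=image,
    prompt="cat in a field",
    motion_scale=4,
    generator=torch.Generator("cpu").manual_seed(0),
)
export_to_gif(output.frames[0], "pia-looping-motion.gif")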
enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, a NumPy array of shape (batch_size, num_frames, channels, height, width), or a Torch tensor of shape (batch_size, num_frames, channels, height, width). Output class for PIAPipeline. diff --git a/scrapped_outputs/42d9e63e35308b8eac15d5f5043d60b8.txt b/scrapped_outputs/42d9e63e35308b8eac15d5f5043d60b8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/42d9e63e35308b8eac15d5f5043d60b8.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete, in which case the timestep is an int, or continuous, in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output:
during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model
during inference, a scheduler defines how to update a sample based on a pretrained model’s output
Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below:
A1111/k-diffusion → 🤗 Diffusers (usage)
DPM++ 2M → DPMSolverMultistepScheduler
DPM++ 2M Karras → DPMSolverMultistepScheduler (init with use_karras_sigmas=True)
DPM++ 2M SDE → DPMSolverMultistepScheduler (init with algorithm_type="sde-dpmsolver++")
DPM++ 2M SDE Karras → DPMSolverMultistepScheduler (init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++")
DPM++ 2S a → N/A (very similar to DPMSolverSinglestepScheduler)
DPM++ 2S a Karras → N/A (very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...))
DPM++ SDE → DPMSolverSinglestepScheduler
DPM++ SDE Karras → DPMSolverSinglestepScheduler (init with use_karras_sigmas=True)
DPM2 → KDPM2DiscreteScheduler
DPM2 Karras → KDPM2DiscreteScheduler (init with use_karras_sigmas=True)
DPM2 a → KDPM2AncestralDiscreteScheduler
DPM2 a Karras → KDPM2AncestralDiscreteScheduler (init with use_karras_sigmas=True)
DPM adaptive → N/A
DPM fast → N/A
Euler → EulerDiscreteScheduler
Euler a → EulerAncestralDiscreteScheduler
Heun → HeunDiscreteScheduler
LMS → LMSDiscreteScheduler
LMS Karras → LMSDiscreteScheduler (init with use_karras_sigmas=True)
N/A → DEISMultistepScheduler
N/A → UniPCMultistepScheduler
All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities.
ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. 
Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/42e2b79faf0218c212bba94e1a76b61a.txt b/scrapped_outputs/42e2b79faf0218c212bba94e1a76b61a.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/42e2b79faf0218c212bba94e1a76b61a.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. 
The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. 
Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/42ede440dde823caa848409d57c3e97f.txt b/scrapped_outputs/42ede440dde823caa848409d57c3e97f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/43143caeec74317dcd172f50aeafa192.txt b/scrapped_outputs/43143caeec74317dcd172f50aeafa192.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/43338ed30875372427a74baa2279e694.txt b/scrapped_outputs/43338ed30875372427a74baa2279e694.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/43338ed30875372427a74baa2279e694.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/4340d5c94b159209eb3f8dbe2fda0596.txt b/scrapped_outputs/4340d5c94b159209eb3f8dbe2fda0596.txt new file mode 100644 index 0000000000000000000000000000000000000000..78a3b346e030b1216370c892ffe83052959511a8 --- /dev/null +++ b/scrapped_outputs/4340d5c94b159209eb3f8dbe2fda0596.txt @@ -0,0 +1,105 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. 
To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alter. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/434ad250c0dbeeac9302a3f242b0ac09.txt b/scrapped_outputs/434ad250c0dbeeac9302a3f242b0ac09.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/434ad250c0dbeeac9302a3f242b0ac09.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images.
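As a quick orientation before the API reference below, the sketch that follows shows the three image inputs the checkpoint expects: the source image to edit, the mask marking the region to repaint (white pixels are repainted, black pixels are kept), and the exemplar image that guides the repainted content. It mirrors the docstring example further down; the local file names here are placeholders, not files shipped with the library. Copied
import torch
from diffusers import PaintByExamplePipeline
from diffusers.utils import load_image

pipe = PaintByExamplePipeline.from_pretrained(
    "Fantasy-Studio/Paint-by-Example", torch_dtype=torch.float16
).to("cuda")

# Placeholder file names for the three inputs described above.
init_image = load_image("input.png").resize((512, 512))       # image to edit
mask_image = load_image("mask.png").resize((512, 512))        # white = repaint, black = keep
example_image = load_image("reference.jpg").resize((512, 512))  # exemplar guiding the edit

image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
image.save("paint_by_example.png")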
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... 
) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/4361cc930b8678ffa3a344f75220fa93.txt b/scrapped_outputs/4361cc930b8678ffa3a344f75220fa93.txt new file mode 100644 index 0000000000000000000000000000000000000000..cd183c157cf8eff7e7916669417c27bf06b12611 --- /dev/null +++ b/scrapped_outputs/4361cc930b8678ffa3a344f75220fa93.txt @@ -0,0 +1,225 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. 
+>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
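For illustration, here is a minimal sketch (not part of the original pipeline documentation) of how these helpers might be combined with the pipeline and inputs from the example above; the FreeU values are the Stable Diffusion v1-style settings suggested in the FreeU repository and are an assumption that may need tuning for this checkpoint: Copied
+>>> # Hypothetical usage sketch: memory-saving VAE options plus FreeU on the LCM image-to-image pipeline.
+>>> pipe.enable_vae_slicing()
+>>> pipe.enable_vae_tiling()
+>>> pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)  # assumed SD v1-style FreeU values
+>>> images = pipe(prompt=prompt, image=image, num_inference_steps=4, guidance_scale=8.0).images
+>>> # Revert the toggles when they are no longer needed.
+>>> pipe.disable_freeu()
+>>> pipe.disable_vae_slicing()
+>>> pipe.disable_vae_tiling()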
disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/43e9dfd8803bd922afd8fe4ecb77a937.txt b/scrapped_outputs/43e9dfd8803bd922afd8fe4ecb77a937.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/43e9dfd8803bd922afd8fe4ecb77a937.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. 
There are several techniques that can address this limitation such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference. With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from a 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. torch.compile PyTorch 2 includes torch.compile, which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduce the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed, and when it is placed on the GPU this indexing causes a communication sync between the CPU and GPU. This introduces latency, and it becomes more evident once the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, CPU-GPU communication syncs should be avoided or kept to a bare minimum because they can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consist of attention blocks and feed-forward blocks.
In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. 
Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/43f4074e80671b332665f3ef0b8cfac4.txt b/scrapped_outputs/43f4074e80671b332665f3ef0b8cfac4.txt new file mode 100644 index 0000000000000000000000000000000000000000..17224b1ed5c3d86d388a56d170467252653c4fe2 --- /dev/null +++ b/scrapped_outputs/43f4074e80671b332665f3ef0b8cfac4.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. 
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, default to True) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without loosing too much precision in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. 
+If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. 
block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4434aab4a5b79de971a156093d9cb439.txt b/scrapped_outputs/4434aab4a5b79de971a156093d9cb439.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/4434aab4a5b79de971a156093d9cb439.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. 
+To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. 
Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. 
Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/4450599abd7902faba30075a13f2734d.txt b/scrapped_outputs/4450599abd7902faba30075a13f2734d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c66910ad9ca6beeb39c34723e62fcd881d993d7f --- /dev/null +++ b/scrapped_outputs/4450599abd7902faba30075a13f2734d.txt @@ -0,0 +1,385 @@ +Load pipelines, models, and schedulers + +Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. +Everything you need for inference or training is accessible with the from_pretrained() method. +This guide will show you how to load: +pipelines from the Hub and locally +different components into a pipeline +checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights +models and schedulers + +Diffusion Pipeline + +💡 Skip to the DiffusionPipeline explained section if you interested in learning in more detail about how the DiffusionPipeline class works. +The DiffusionPipeline class is the simplest and most generic way to load any diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. 
+ + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id) +You can also load a checkpoint with it’s specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: + + + Copied +from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id) +A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with it’s corresponding task-specific pipeline class: + + + Copied +from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) + +Local pipeline + +To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +Then pass the local path to from_pretrained(): + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) +The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. + +Swap components in a pipeline + +You can customize the default components of any pipeline with another compatible component. Customization is important because: +Changing the scheduler is important for exploring the trade-off between generation speed and quality. +Different components of a model are typically trained independently and you can swap out a component with a better-performing one. +During finetuning, usually only some components - like the UNet or text encoder - are trained. +To find out which schedulers are compatible for customization, you can use the compatibles method: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) +stable_diffusion.scheduler.compatibles +Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. +Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: + + + Copied +from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" + +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") + +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler) + +Safety checker + +Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. 
If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None) + +Reuse components across pipelines + +You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: + + + Copied +from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) + +components = stable_diffusion_txt2img.components +Then you can pass the components to another pipeline without reloading the weights into RAM: + + + Copied +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) +You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: + + + Copied +from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) + +Checkpoint variants + +A checkpoint variant is usually a checkpoint where its weights are: +Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. +Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue finetuning a model. +💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). +Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), the same model structure, and their weights have identical tensor shapes.
+checkpoint type | weight name | argument for loading weights
+original | diffusion_pytorch_model.bin |
+floating point | diffusion_pytorch_model.fp16.bin | variant, torch_dtype
+non-EMA | diffusion_pytorch_model.non_ema.bin | variant
+There are two important arguments to know for loading variants: +torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading an fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16.
In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. +variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. + + + Copied +from diffusers import DiffusionPipeline + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16 +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") +To save a checkpoint stored in a different floating point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: + + + Copied +from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") +If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: + + + Copied +# 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", torch_dtype=torch.float16) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16 +) + +Models + +Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of redownloading them. +Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: + + + Copied +from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet") +Or directly from a repository’s directory: + + + Copied +from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id) +You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): + + + Copied +from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non-ema") +model.save_pretrained("./local-unet", variant="non-ema") + +Schedulers + +Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. +Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
+For example, the following schedulers are compatible with StableDiffusionPipeline which means you can load the same scheduler configuration file in any of these classes: + + + Copied +from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerDiscreteScheduler, + EulerAncestralDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm) + +DiffusionPipeline explained + +As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: +Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. +Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. +The pipelines underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id) +print(pipeline) +You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: +"feature_extractor": a CLIPFeatureExtractor from 🤗 Transformers. +"safety_checker": a component for screening against harmful content. +"scheduler": an instance of PNDMScheduler. +"text_encoder": a CLIPTextModel from 🤗 Transformers. +"tokenizer": a CLIPTokenizer from 🤗 Transformers. +"unet": an instance of UNet2DConditionModel. +"vae" an instance of AutoencoderKL. + + + Copied +StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} +Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: + + + Copied +. 
+├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +│   └── pytorch_model.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +│   └── pytorch_model.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +└── vae + ├── config.json + ├── diffusion_pytorch_model.bin +You can access each of the components of the pipeline as an attribute to view its configuration: + + + Copied +pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, +) +Every pipeline expects a model_index.json file that tells the DiffusionPipeline: +which pipeline class to load from _class_name +which version of 🧨 Diffusers was used to create the model in _diffusers_version +what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) + + + Copied +{ + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/446410bd49e8bf27583c55756c77b7fa.txt b/scrapped_outputs/446410bd49e8bf27583c55756c77b7fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..a4946c1f029b1a65fb2ea488de115da9e2a87fcc --- /dev/null +++ b/scrapped_outputs/446410bd49e8bf27583c55756c77b7fa.txt @@ -0,0 +1,43 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. 
beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. 
Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/446b478296c40aa576192aae2cb9634a.txt b/scrapped_outputs/446b478296c40aa576192aae2cb9634a.txt new file mode 100644 index 0000000000000000000000000000000000000000..d4c2fb047f6e75d39f262c7ba451d086678ceef6 --- /dev/null +++ b/scrapped_outputs/446b478296c40aa576192aae2cb9634a.txt @@ -0,0 +1,378 @@ +unCLIP + + +Overview + +Hierarchical Text-Conditional Image Generation with CLIP Latents by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen +The abstract of the paper is the following: +Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. +The unCLIP model in diffusers comes from kakaobrain’s karlo and the original codebase can be found here. Additionally, lucidrains has a DALL-E 2 recreation here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_unclip.py +Text-to-Image Generation +- +pipeline_unclip_image_variation.py +Image-Guided Image Generation +- + +UnCLIPPipeline + + +class diffusers.UnCLIPPipeline + +< +source +> +( +prior: PriorTransformer +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +super_res_first: UNet2DModel +super_res_last: UNet2DModel +prior_scheduler: UnCLIPScheduler +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. 
+ + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process. Just a modified DDPMScheduler. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. + + + +Pipeline for text-to-image generation using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +prior_num_inference_steps: int = 25 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prior_latents: typing.Optional[torch.FloatTensor] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +text_model_output: typing.Union[transformers.models.clip.modeling_clip.CLIPTextModelOutput, typing.Tuple, NoneType] = None +text_attention_mask: typing.Optional[torch.Tensor] = None +prior_guidance_scale: float = 4.0 +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. This can only be left undefined if +text_model_output and text_attention_mask is passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. + + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. 
Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text outputs +can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left to None. + + +text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +class diffusers.UnCLIPImageVariationPipeline + +< +source +> +( +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +feature_extractor: CLIPImageProcessor +image_encoder: CLIPVisionModelWithProjection +super_res_first: UNet2DModel +super_res_last: UNet2DModel +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. + + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. 
+ + + +Pipeline to generate variations from an input image using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor, NoneType] = None +num_images_per_prompt: int = 1 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Optional[torch._C.Generator] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +image_embeddings: typing.Optional[torch.Tensor] = None +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPImageProcessor. Can be left to None only when image_embeddings are passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can the be left to None. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
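For reference, here is a minimal end-to-end sketch of text-to-image generation with UnCLIPPipeline; it assumes the kakaobrain/karlo-v1-alpha checkpoint and a CUDA device are available:

import torch
from diffusers import UnCLIPPipeline

# Load the Karlo unCLIP checkpoint in half precision and move it to the GPU
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"
image = pipe(prompt).images[0]
image.save("frog.png")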
diff --git a/scrapped_outputs/446df00791cbd64ab7127486c9d5809a.txt b/scrapped_outputs/446df00791cbd64ab7127486c9d5809a.txt new file mode 100644 index 0000000000000000000000000000000000000000..3de545917a945be758b5da9cc73ec3840eca6cd1 --- /dev/null +++ b/scrapped_outputs/446df00791cbd64ab7127486c9d5809a.txt @@ -0,0 +1,108 @@ +Installation + +Install 🤗 Diffusers for whichever deep learning library you’re working with. +🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and flax. Follow the installation instructions below for the deep learning library you are using: +PyTorch installation instructions. +Flax installation instructions. + +Install with pip + +You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. +Start by creating a virtual environment in your project directory: + + + Copied +python -m venv .env +Activate the virtual environment: + + + Copied +source .env/bin/activate +Now you’re ready to install 🤗 Diffusers with the following command: +For PyTorch + + + Copied +pip install diffusers["torch"] +For Flax + + + Copied +pip install diffusers["flax"] + +Install from source + +Before intsalling diffusers from source, make sure you have torch and accelerate installed. +For torch installation refer to the torch docs. +To install accelerate + + + Copied +pip install accelerate +Install 🤗 Diffusers from source with the following command: + + + Copied +pip install git+https://github.com/huggingface/diffusers +This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue, so we can fix it even sooner! + +Editable install + +You will need an editable install if you’d like to: +Use the main version of the source code. +Contribute to 🤗 Diffusers and need to test changes in the code. +Clone the repository and install 🤗 Diffusers with the following commands: + + + Copied +git clone https://github.com/huggingface/diffusers.git +cd diffusers +For PyTorch + + + Copied +pip install -e ".[torch]" +For Flax + + + Copied +pip install -e ".[flax]" +These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.7/site-packages/, Python will also search the folder you cloned to: ~/diffusers/. +You must keep the diffusers folder if you want to keep using the library. +Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: + + + Copied +cd ~/diffusers/ +git pull +Your Python environment will find the main version of 🤗 Diffusers on the next run. + +Notice on telemetry logging + +Our library gathers telemetry information during from_pretrained() requests. 
+This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the HuggingFace Hub, +and is not collected during local usage. +We understand that not everyone wants to share additional information, and we respect your privacy, +so you can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: +On Linux/MacOS: + + + Copied +export DISABLE_TELEMETRY=YES +On Windows: + + + Copied +set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/4490049c26592d83444a7ba10e3f812c.txt b/scrapped_outputs/4490049c26592d83444a7ba10e3f812c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/449e67d1b9cc62245d2ebc096d53e817.txt b/scrapped_outputs/449e67d1b9cc62245d2ebc096d53e817.txt new file mode 100644 index 0000000000000000000000000000000000000000..eae7bd4faaff8cd637ecc9e7c8c93e29f55d9c4b --- /dev/null +++ b/scrapped_outputs/449e67d1b9cc62245d2ebc096d53e817.txt @@ -0,0 +1,166 @@ +Text or image-to-video Driven by the success of text-to-image diffusion models, generative video models are able to generate short clips of video from a text prompt or an initial image. These models extend a pretrained diffusion model to generate videos by adding some type of temporal and/or spatial convolution layer to the architecture. A mixed dataset of images and videos are used to train the model which learns to output a series of video frames based on the text or image conditioning. This guide will show you how to generate videos, how to configure video model parameters, and how to control video generation. Popular models Discover other cool and trending video generation models on the Hub here! Stable Video Diffusions (SVD), I2VGen-XL, AnimateDiff, and ModelScopeT2V are popular models used for video diffusion. Each model is distinct. For example, AnimateDiff inserts a motion modeling module into a frozen text-to-image model to generate personalized animated images, whereas SVD is entirely pretrained from scratch with a three-stage training process to generate short high-quality videos. Stable Video Diffusion SVD is based on the Stable Diffusion 2.1 model and it is trained on images, then low-resolution videos, and finally a smaller dataset of high-resolution videos. This model generates a short 2-4 second video from an initial image. You can learn more details about model, like micro-conditioning, in the Stable Video Diffusion guide. Begin by loading the StableVideoDiffusionPipeline and passing an initial image to generate a video from. 
Copied import torch +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipeline = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipeline(image, decode_chunk_size=8, generator=generator).frames[0] +export_to_video(frames, "generated.mp4", fps=7) initial image generated video I2VGen-XL I2VGen-XL is a diffusion model that can generate higher resolution videos than SVD and it is also capable of accepting text prompts in addition to images. The model is trained with two hierarchical encoders (detail and global encoder) to better capture low and high-level details in images. These learned details are used to train a video diffusion model which refines the video resolution and details in the generated video. You can use I2VGen-XL by loading the I2VGenXLPipeline, and passing a text and image prompt to generate a video. Copied import torch +from diffusers import I2VGenXLPipeline +from diffusers.utils import export_to_gif, load_image + +pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() + +image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +image = load_image(image_url).convert("RGB") + +prompt = "Papers were floating in the air on a table in the library" +negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +generator = torch.manual_seed(8888) + +frames = pipeline( + prompt=prompt, + image=image, + num_inference_steps=50, + negative_prompt=negative_prompt, + guidance_scale=9.0, + generator=generator +).frames[0] +export_to_gif(frames, "i2v.gif") initial image generated video AnimateDiff AnimateDiff is an adapter model that inserts a motion module into a pretrained diffusion model to animate an image. The adapter is trained on video clips to learn motion which is used to condition the generation process to create a video. It is faster and easier to only train the adapter and it can be loaded into most diffusion models, effectively turning them into “video models”. Start by loading a MotionAdapter. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) Then load a finetuned Stable Diffusion model with the AnimateDiffPipeline. Copied pipeline = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + "emilianJR/epiCRealism", + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipeline.scheduler = scheduler +pipeline.enable_vae_slicing() +pipeline.enable_model_cpu_offload() Create a prompt and generate the video. 
Copied output = pipeline( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=50, + generator=torch.Generator("cpu").manual_seed(49), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") ModelscopeT2V ModelscopeT2V adds spatial and temporal convolutions and attention to a UNet, and it is trained on image-text and video-text datasets to enhance what it learns during training. The model takes a prompt, encodes it and creates text embeddings which are denoised by the UNet, and then decoded by a VQGAN into a video. ModelScopeT2V generates watermarked videos due to the datasets it was trained on. To use a watermark-free model, try the cerspense/zeroscope_v2_76w model with the TextToVideoSDPipeline first, and then upscale it’s output with the cerspense/zeroscope_v2_XL checkpoint using the VideoToVideoSDPipeline. Load a ModelScopeT2V checkpoint into the DiffusionPipeline along with a prompt to generate a video. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipeline = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +prompt = "Confident teddy bear surfer rides the wave in the tropics" +video_frames = pipeline(prompt).frames[0] +export_to_video(video_frames, "modelscopet2v.mp4", fps=10) Configure model parameters There are a few important parameters you can configure in the pipeline that’ll affect the video generation process and quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Number of frames The num_frames parameter determines how many video frames are generated per second. A frame is an image that is played in a sequence of other frames to create motion or a video. This affects video length because the pipeline generates a certain number of frames per second (check a pipeline’s API reference for the default value). To increase the video duration, you’ll need to increase the num_frames parameter. Copied import torch +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipeline = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipeline(image, decode_chunk_size=8, generator=generator, num_frames=25).frames[0] +export_to_video(frames, "generated.mp4", fps=7) num_frames=14 num_frames=25 Guidance scale The guidance_scale parameter controls how closely aligned the generated video and text prompt or initial image is. A higher guidance_scale value means your generated video is more aligned with the text prompt or initial image, while a lower guidance_scale value means your generated video is less aligned which could give the model more “creativity” to interpret the conditioning input. SVD uses the min_guidance_scale and max_guidance_scale parameters for applying guidance to the first and last frames respectively. 
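For SVD specifically, the sketch below passes these two parameters explicitly; it reuses the rocket image from the earlier example, and the values shown are only illustrative:

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipeline = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png")
image = image.resize((1024, 576))

generator = torch.manual_seed(42)
# Guidance ramps from min_guidance_scale on the first frame to max_guidance_scale on the last frame
frames = pipeline(
    image,
    decode_chunk_size=8,
    generator=generator,
    min_guidance_scale=1.0,
    max_guidance_scale=2.0,
).frames[0]
export_to_video(frames, "generated.mp4", fps=7)

The next example instead varies the single guidance_scale value with I2VGen-XL: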
Copied import torch +from diffusers import I2VGenXLPipeline +from diffusers.utils import export_to_gif, load_image + +pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() + +image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +image = load_image(image_url).convert("RGB") + +prompt = "Papers were floating in the air on a table in the library" +negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +generator = torch.manual_seed(0) + +frames = pipeline( + prompt=prompt, + image=image, + num_inference_steps=50, + negative_prompt=negative_prompt, + guidance_scale=1.0, + generator=generator +).frames[0] +export_to_gif(frames, "i2v.gif") guidance_scale=9.0 guidance_scale=1.0 Negative prompt A negative prompt deters the model from generating things you don’t want it to. This parameter is commonly used to improve overall generation quality by removing poor or bad features such as “low resolution” or “bad details”. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) + +pipeline = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + "emilianJR/epiCRealism", + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipeline.scheduler = scheduler +pipeline.enable_vae_slicing() +pipeline.enable_model_cpu_offload() + +output = pipeline( + prompt="360 camera shot of a sushi roll in a restaurant", + negative_prompt="Distorted, discontinuous, ugly, blurry, low resolution, motionless, static", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=50, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") no negative prompt negative prompt applied Model-specific parameters There are some pipeline parameters that are unique to each model such as adjusting the motion in a video or adding noise to the initial image. Stable Video Diffusion Text2Video-Zero Stable Video Diffusion provides additional micro-conditioning for the frame rate with the fps parameter and for motion with the motion_bucket_id parameter. Together, these parameters allow for adjusting the amount of motion in the generated video. There is also a noise_aug_strength parameter that increases the amount of noise added to the initial image. Varying this parameter affects how similar the generated video and initial image are. A higher noise_aug_strength also increases the amount of motion. To learn more, read the Micro-conditioning guide. Control video generation Video generation can be controlled similar to how text-to-image, image-to-image, and inpainting can be controlled with a ControlNetModel. The only difference is you need to use the CrossFrameAttnProcessor so each frame attends to the first frame. Text2Video-Zero Text2Video-Zero video generation can be conditioned on pose and edge images for even greater control over a subject’s motion in the generated video or to preserve the identity of a subject/object in the video. 
You can also use Text2Video-Zero with InstructPix2Pix for editing videos with text. pose control edge control InstructPix2Pix Start by downloading a video and extracting the pose images from it. Copied from huggingface_hub import hf_hub_download +from PIL import Image +import imageio + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Load a ControlNetModel for pose estimation and a checkpoint into the StableDiffusionControlNetPipeline. Then you’ll use the CrossFrameAttnProcessor for the UNet and ControlNet. Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +pipeline.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipeline.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) Fix the latents for all the frames, and then pass your prompt and extracted pose images to the model to generate a video. Copied latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipeline(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Optimize Video generation requires a lot of memory because you’re generating many video frames at once. You can reduce your memory requirements at the expense of some inference speed. Try: offloading pipeline components that are no longer needed to the CPU feed-forward chunking runs the feed-forward layer in a loop instead of all at once break up the number of frames the VAE has to decode into chunks instead of decoding them all at once Copied - pipeline.enable_model_cpu_offload() +- frames = pipeline(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipeline.enable_model_cpu_offload() ++ pipeline.unet.enable_forward_chunking() ++ frames = pipeline(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] If memory is not an issue and you want to optimize for speed, try wrapping the UNet with torch.compile. Copied - pipeline.enable_model_cpu_offload() ++ pipeline.to("cuda") ++ pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) diff --git a/scrapped_outputs/44a784fcb3c4a627b3422c4568006fbe.txt b/scrapped_outputs/44a784fcb3c4a627b3422c4568006fbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..8963c1c7f764c0c2246ed475bbab38ae00c783e3 --- /dev/null +++ b/scrapped_outputs/44a784fcb3c4a627b3422c4568006fbe.txt @@ -0,0 +1,459 @@ +Evaluating Diffusion Models + + +Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? 
+Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. +In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. +The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. + +Scenarios + +We cover Diffusion models with the following pipelines: +Text-guided image generation (such as the StableDiffusionPipeline). +Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline, and StableDiffusionInstructPix2PixPipeline). +Class-conditioned image generation models (such as the DiTPipeline). + +Qualitative Evaluation + +Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. +From the official Parti website: +PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. + +PartiPrompts has the following columns: +Prompt +Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) +Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) +These benchmarks allow for side-by-side human evaluation of different image generation models. Let’s see how we can use diffusers on a couple of PartiPrompts. +Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. + + + Copied +from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] +Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): + + + Copied +import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images + +We can also set num_images_per_prompt accordingly to compare different images for the same prompt. 
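As a minimal sketch of that, the call below generates four candidates for each of the prompts above with the same sd_pipeline (the number four is arbitrary):

images_per_prompt = sd_pipeline(
    sample_prompts,
    num_images_per_prompt=4,
    generator=torch.manual_seed(seed),
    output_type="numpy",
).images

# 5 prompts x 4 images per prompt at the default 512x512 resolution
print(images_per_prompt.shape)
# (20, 512, 512, 3)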
Running the same pipeline but with a different checkpoint (v1-5), yields: + +Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. +It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. + +Quantitative Evaluation + +In this section, we will walk you through how to evaluate three different diffusion pipelines using: +CLIP score +CLIP directional similarity +FID + +Text-guided image generation + +CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. +Let’s first load a StableDiffusionPipeline: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") +Generate some images with multiple prompts: + + + Copied +prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="numpy").images + +print(images.shape) +# (6, 512, 512, 3) +And then, we calculate the CLIP score. + + + Copied +from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 +In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. +Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. 
First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: + + + Copied +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images +Then we load the v1-5 checkpoint to generate images: + + + Copied +model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images +And finally, we compare their CLIP scores: + + + Copied +sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 +It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. +By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. + +Image-conditioned text-to-image generation + +In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. +Here is one example: + +One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. +Caption 1 corresponds to the input image (image 1) that is to be edited. +Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. +Following is a pictorial overview: + +We have prepared a mini dataset to implement this metric. Let’s first load the dataset. + + + Copied +from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features + + + Copied +{'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} +Here we have: +input is a caption corresponding to the image. +edit denotes the edit instruction. +output denotes the modified caption reflecting the edit instruction. +Let’s take a look at a sample. + + + Copied +idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") + + + Copied +Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. 
It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +And here is the image: + + + Copied +dataset[idx]["image"] + +We will first edit the images of our dataset with the edit instruction and compute the directional similarity. +Let’s first load the StableDiffusionInstructPix2PixPipeline: + + + Copied +from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) +Now, we perform the edits: + + + Copied +import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="numpy", + generator=generator, + ).images[0] + return image + + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) +To measure the directional similarity, we first load CLIP’s image and text encoders: + + + Copied +from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) +Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. 
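Concretely, the directional similarity is the cosine similarity between the change in CLIP image space and the change in CLIP text space, i.e. sim = cos((E_img2 - E_img1), (E_txt2 - E_txt1)), where E_img1 and E_txt1 are the embeddings of the original image and caption, and E_img2 and E_txt2 are the embeddings of the edited image and modified caption.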
+Next, we prepare a PyTorch nn.Module to compute directional similarity: + + + Copied +import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity +Let’s put DirectionalSimilarity to use now. + + + Copied +dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 +Like the CLIP Score, the higher the CLIP directional similarity, the better it is. +It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. +We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. +We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. +Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. 
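To sketch the image-image variant mentioned above, the dir_similarity module defined earlier can be reused on a pair of original and edited images (here simply the last original_image and edited_image bound in the loop above):

img_feat_one = dir_similarity.encode_image(original_image)
img_feat_two = dir_similarity.encode_image(edited_image)

# Higher values indicate that the edit preserved the overall semantics of the image
image_similarity = F.cosine_similarity(img_feat_two, img_feat_one)
print(float(image_similarity.detach().cpu()))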
+Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. +Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. + +Class-conditioned image generation + +Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. +FID aims to measure how similar are two datasets of images. As per this resource: +Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. +These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. +Let’s first download a few images from the ImageNet-1k training set: + + + Copied +from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") + + + Copied +from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] +These are 10 images from the following Imagenet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. + +Real images. +Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. + + + Copied +from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) +We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. 
+ + + Copied +from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="numpy") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) +Now, we can compute the FID using torchmetrics. + + + Copied +from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 +The lower the FID, the better it is. Several things can influence FID here: +Number of images (both real and fake) +Randomness induced in the diffusion process +Number of inference steps in the diffusion process +The scheduler being used in the diffusion process +For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. +FID results tend to be fragile as they depend on a lot of factors: +The specific Inception model used during computation. +The implementation accuracy of the computation. +The image format (not the same if we start from PNGs vs JPGs). +Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. +These points apply to other related metrics too, such as KID and IS. +As a final step, let’s visually inspect the fake_images. + +Fake images. diff --git a/scrapped_outputs/44d171f8549177e13ace74a6fc99789a.txt b/scrapped_outputs/44d171f8549177e13ace74a6fc99789a.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/44d171f8549177e13ace74a6fc99789a.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. 
Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. 
timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/44dc0bead59f241f664fe391971aa9cd.txt b/scrapped_outputs/44dc0bead59f241f664fe391971aa9cd.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/44dc0bead59f241f664fe391971aa9cd.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. 
For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. 
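To tie the pieces above together, here is a minimal sketch that exercises the setters and helpers documented on this page:
Copied
import diffusers
from diffusers.utils import logging

# Most verbose level (integer value 10), then read it back.
diffusers.logging.set_verbosity(diffusers.logging.DEBUG)
print(diffusers.logging.get_verbosity())  # 10

# Switch every 🤗 Diffusers logger to the explicit format and hide the tqdm bars
# shown during model download.
logging.enable_explicit_format()
logging.disable_progress_bar()

# Restore the defaults: standard formatting, progress bars on, WARNING verbosity.
logging.reset_format()
logging.enable_progress_bar()
diffusers.logging.set_verbosity_warning()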
diff --git a/scrapped_outputs/44edd3f4914a04a32050ba5a2eabce4d.txt b/scrapped_outputs/44edd3f4914a04a32050ba5a2eabce4d.txt new file mode 100644 index 0000000000000000000000000000000000000000..31cbdde7d3f5e542cc1b460fc970c1544f49a07d --- /dev/null +++ b/scrapped_outputs/44edd3f4914a04a32050ba5a2eabce4d.txt @@ -0,0 +1,71 @@ +Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with an SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected.
When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. 
Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] Saving a pipeline after fusing the adapters To properly save a pipeline after it’s been loaded with the adapters, it should be serialized like so: Copied pipe.fuse_lora(lora_scale=1.0) +pipe.unload_lora_weights() +pipe.save_pretrained("path-to-pipeline") diff --git a/scrapped_outputs/45065a66cf1de9905085f074554d954f.txt b/scrapped_outputs/45065a66cf1de9905085f074554d954f.txt new file mode 100644 index 0000000000000000000000000000000000000000..84c29e20830b17a55539328c54b57bc8ab14854f --- /dev/null +++ b/scrapped_outputs/45065a66cf1de9905085f074554d954f.txt @@ -0,0 +1,97 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt, unlike existing text-to-image diffusion models such as Stable Diffusion, which only generate an image. With almost the same number of parameters, LDM3D manages to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper. ldm3d-4c. The new version of LDM3D using 4-channel inputs instead of 6-channel inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design.
Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 timesteps: List = None guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
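For instance, encode_prompt can be used to precompute embeddings once and reuse them through the prompt_embeds and negative_prompt_embeds arguments of __call__. A minimal sketch, assuming (as with the other Stable Diffusion pipelines) that it returns a (prompt_embeds, negative_prompt_embeds) tuple:
Copied
import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c", torch_dtype=torch.float16).to("cuda")

# Encode the prompt once (assumes a (prompt_embeds, negative_prompt_embeds) return value) ...
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a photo of an astronaut riding a horse on mars",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# ... and reuse the embeddings across several generations.
output = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds)
rgb, depth = output.rgb, output.depth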
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods. Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. It can be used in a cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from the community pipelines. diff --git a/scrapped_outputs/4551a06ca6e8835899a6e78cb92e213c.txt b/scrapped_outputs/4551a06ca6e8835899a6e78cb92e213c.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fbae7407bd5ca9d68e49a6dd0e5b1c668ca0baf --- /dev/null +++ b/scrapped_outputs/4551a06ca6e8835899a6e78cb92e213c.txt @@ -0,0 +1,336 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting, which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5, are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1] If it’s a tensor or a list or tensors, the +expected shape should be (B, C, H, W) or (C, H, W). 
If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing the mask to apply to the image batch. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W). For a numpy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. 
save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
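Since the padding_mask_crop argument described above is easy to miss, here is a minimal sketch of how it can be combined with a regular inpainting call (the 32-pixel margin is only an illustrative value):
Copied
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
).resize((512, 512))
mask_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
).resize((512, 512))

# Crop to the masked region (plus a margin) before inpainting; the result is pasted
# back into the full-resolution image.
image = pipe(
    prompt="Face of a yellow cat, high resolution, sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,
).images[0]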
FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. 
Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/459f5594e231554703a7790a1b34d667.txt b/scrapped_outputs/459f5594e231554703a7790a1b34d667.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/459f5594e231554703a7790a1b34d667.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. 
The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/45c0463b259e6c837e214e0323aca1a9.txt b/scrapped_outputs/45c0463b259e6c837e214e0323aca1a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..d389bbcd127fb4947ea0a7cb5913033c8204bc76 --- /dev/null +++ b/scrapped_outputs/45c0463b259e6c837e214e0323aca1a9.txt @@ -0,0 +1,111 @@ +Stable Diffusion text-to-image fine-tuning + +The train_text_to_image.py script shows how to fine-tune the stable diffusion model on your own dataset. +The text-to-image fine-tuning script is experimental. It’s easy to overfit and run into issues like catastrophic forgetting. We recommend to explore different hyperparameters to get the best results on your dataset. + +Running locally + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: + + + Copied +pip install git+https://github.com/huggingface/diffusers.git +pip install -U -r requirements.txt +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config +You need to accept the model license before downloading or using the weights. In this example we’ll use model version v1-4, so you’ll need to visit its card, read the license and tick the checkbox if you agree. +You have to be a registered user in 🤗 Hugging Face Hub, and you’ll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation. +Run the following command to authenticate your token + + + Copied +huggingface-cli login +If you have already cloned the repo, then you won’t need to go through these steps. Instead, you can pass the path to your local checkout to the training script and it will be loaded from there. + +Hardware Requirements for Fine-tuning + +Using gradient_checkpointing and mixed_precision it should be possible to fine tune the model on a single 24GB GPU. For higher batch_size and faster training it’s better to use GPUs with more than 30GB of GPU memory. You can also use JAX / Flax for fine-tuning on TPUs or GPUs, see below for details. 
+ +Fine-tuning Example + +The following script will launch a fine-tuning run using Justin Pinkneys’ captioned Pokemon dataset, available in Hugging Face Hub. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" +To run on your own training files you need to prepare the dataset according to the format required by datasets. You can upload your dataset to the Hub, or you can prepare a local folder with your files. This documentation explains how to do it. +You should modify the script if you wish to use custom loading logic. We have left pointers in the code in the appropriate places :) + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export TRAIN_DIR="path_to_your_dataset" +export OUTPUT_DIR="path_to_save_model" + +accelerate launch train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$TRAIN_DIR \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} +Once training is finished the model will be saved to the OUTPUT_DIR specified in the command. To load the fine-tuned model for inference, just pass that path to StableDiffusionPipeline: + + + Copied +from diffusers import StableDiffusionPipeline + +model_path = "path_to_saved_model" +pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(prompt="yoda").images[0] +image.save("yoda-pokemon.png") + +Flax / JAX fine-tuning + +Thanks to @duongna211 it’s possible to fine-tune Stable Diffusion using Flax! This is very efficient on TPU hardware but works great on GPUs too. You can use the Flax training script like this: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" diff --git a/scrapped_outputs/45f7d537171c16e1f8aa000a91783d16.txt b/scrapped_outputs/45f7d537171c16e1f8aa000a91783d16.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/45f7d537171c16e1f8aa000a91783d16.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. 
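A minimal usage sketch with the RePaintPipeline follows; it assumes an unconditional 256×256 DDPM checkpoint (google/ddpm-ema-celebahq-256 is one example) and a locally available image and mask, where a mask value of 0.0 marks the region to inpaint. Copied
import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

# Any unconditional DDPM checkpoint can serve as the generative prior; this id is only an example.
model_id = "google/ddpm-ema-celebahq-256"
scheduler = RePaintScheduler.from_pretrained(model_id)
pipe = RePaintPipeline.from_pretrained(model_id, scheduler=scheduler).to("cuda")

# User-supplied 256x256 image and mask; in the mask, 0.0 (black) indicates the area to inpaint.
original_image = Image.open("original_image.png").convert("RGB")
mask_image = Image.open("mask.png").convert("RGB")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,
    jump_n_sample=10,
    generator=generator,
)
output.images[0].save("inpainted.png")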
The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. 
jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/460e3c4d3cb95733f7d8a1e99427647b.txt b/scrapped_outputs/460e3c4d3cb95733f7d8a1e99427647b.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/460e3c4d3cb95733f7d8a1e99427647b.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. 
Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/46347562d1e6136a231ce101c557d020.txt b/scrapped_outputs/46347562d1e6136a231ce101c557d020.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/46347562d1e6136a231ce101c557d020.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 
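In practice, this scheduler is used through the ScoreSdeVePipeline, as in the sketch below; the checkpoint id is only an example of a publicly available NCSN++ model, and the large step count reflects the default predictor-corrector sampling. Copied
import torch
from diffusers import ScoreSdeVePipeline

# "google/ncsnpp-church-256" is one example checkpoint trained for this pipeline.
pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-church-256").to("cuda")

# The ScoreSdeVeScheduler is loaded as part of the pipeline; sampling uses many steps by default.
generator = torch.manual_seed(0)
image = pipe(num_inference_steps=2000, generator=generator).images[0]
image.save("sde_ve_church.png")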
ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. 
Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/46d636de003d1ce5745c31a520de90dc.txt b/scrapped_outputs/46d636de003d1ce5745c31a520de90dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/46d636de003d1ce5745c31a520de90dc.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. 
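Because the implementation follows the DiffEdit-style latent inversion mentioned above, one plausible way to wire this scheduler up is through StableDiffusionDiffEditPipeline, which exposes an inverse_scheduler component; the checkpoint id and the pairing with DPMSolverMultistepScheduler below are illustrative assumptions rather than the only supported configuration. Copied
import torch
from diffusers import (
    DPMSolverMultistepInverseScheduler,
    DPMSolverMultistepScheduler,
    StableDiffusionDiffEditPipeline,
)

# Example checkpoint; any Stable Diffusion model supported by the DiffEdit pipeline should work.
pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Keep the forward and inverse schedulers configured from the same noise schedule.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipeline.scheduler.config)

# pipeline.invert(prompt=..., image=...) then uses the inverse scheduler to map a real image
# back to latents that the forward scheduler can denoise again.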
DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. 
If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. 
scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/4717e41fffbf7c97042e704006328840.txt b/scrapped_outputs/4717e41fffbf7c97042e704006328840.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/4717e41fffbf7c97042e704006328840.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. 
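As a quick illustration of that unified method, the sketch below loads a full pipeline, a single model component, and a scheduler from the same repository; the checkpoint id is only an example. Copied
import torch
from diffusers import DDIMScheduler, DiffusionPipeline, UNet2DConditionModel

repo_id = "runwayml/stable-diffusion-v1-5"  # example checkpoint

# A complete pipeline...
pipeline = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)

# ...a single model component...
unet = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet")

# ...or a scheduler, all cached locally after the first download.
scheduler = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler")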
diff --git a/scrapped_outputs/473942a38320ddb293ec808c9f38f1b3.txt b/scrapped_outputs/473942a38320ddb293ec808c9f38f1b3.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/473942a38320ddb293ec808c9f38f1b3.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/475758852a6a7de852332ff0f2cf8c8a.txt b/scrapped_outputs/475758852a6a7de852332ff0f2cf8c8a.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/475758852a6a7de852332ff0f2cf8c8a.txt @@ -0,0 +1,115 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. 
This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. 
Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. 
Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. 
To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with single concept multiple concepts Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipeline( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/4764e9a16da11982376b406fb0fe662c.txt b/scrapped_outputs/4764e9a16da11982376b406fb0fe662c.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/4764e9a16da11982376b406fb0fe662c.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. 
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
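As a minimal usage sketch (not part of the API reference above), this is how the scheduler is commonly swapped into an existing Stable Diffusion pipeline; the checkpoint name is just a familiar example and the step count is arbitrary: Copied
import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# Any Stable Diffusion checkpoint with a scheduler config works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Rebuild the scheduler from the existing config so the beta schedule and
# num_train_timesteps stay consistent with the checkpoint.
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("lms_example.png")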
LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/47792cb8a13d584ef0bb00c05e07f06a.txt b/scrapped_outputs/47792cb8a13d584ef0bb00c05e07f06a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/47792cb8a13d584ef0bb00c05e07f06a.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
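To make the tuple and dictionary behavior described above a bit more concrete, here is a short sketch showing that attribute access, key access, and to_tuple() all reach the same underlying data; it reuses the same DDIM example pipeline mentioned earlier: Copied
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()

# Attribute-style, dictionary-style, and tuple-style access return the same object.
images_attr = outputs.images
images_key = outputs["images"]
images_tuple = outputs.to_tuple()[0]  # fields that are None are dropped from the tuple

assert images_attr is images_key is images_tuple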
ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/47e5cbf1603156dca991eca68879612d.txt b/scrapped_outputs/47e5cbf1603156dca991eca68879612d.txt new file mode 100644 index 0000000000000000000000000000000000000000..2eda1601aa5e193fb122502a0750cc9c60eccfd5 --- /dev/null +++ b/scrapped_outputs/47e5cbf1603156dca991eca68879612d.txt @@ -0,0 +1,196 @@ +Stable Cascade This model is built upon the Würstchen architecture and its main +difference to other models like Stable Diffusion is that it is working at a much smaller latent space. Why is this +important? The smaller the latent space, the faster you can run inference and the cheaper the training becomes. +How small is the latent space? Stable Diffusion uses a compression factor of 8, resulting in a 1024x1024 image being +encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning that it is possible to encode a +1024x1024 image to 24x24, while maintaining crisp reconstructions. The text-conditional model is then trained in the +highly compressed latent space. Previous versions of this architecture, achieved a 16x cost reduction over Stable +Diffusion 1.5. Therefore, this kind of model is well suited for usages where efficiency is important. Furthermore, all known extensions +like finetuning, LoRA, ControlNet, IP-Adapter, LCM etc. are possible with this method as well. The original codebase can be found at Stability-AI/StableCascade. Model Overview Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images, +hence the name “Stable Cascade”. Stage A & B are used to compress images, similar to what the job of the VAE is in Stable Diffusion. +However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a +spatial compression factor of 8, encoding an image with resolution of 1024 x 1024 to 128 x 128, Stable Cascade achieves +a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24, while being able to accurately decode the +image. This comes with the great benefit of cheaper training and inference. Furthermore, Stage C is responsible +for generating the small 24 x 24 latents given a text prompt. Uses Direct Use The model is intended for research purposes for now. Possible research areas and tasks include Research on generative models. Safe deployment of models which have the potential to generate harmful content. Probing and understanding the limitations and biases of generative models. Generation of artworks and use in design and other artistic processes. Applications in educational or creative tools. Excluded uses are described below. Out-of-Scope Use The model was not trained to be factual or true representations of people or events, +and therefore using the model to generate such content is out-of-scope for the abilities of this model. +The model should not be used in any way that violates Stability AI’s Acceptable Use Policy. Limitations and Bias Limitations Faces and people in general may not be generated properly. 
The autoencoding part of the model is lossy. StableCascadeCombinedPipeline class diffusers.StableCascadeCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: StableCascadeUNet scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_prior: StableCascadeUNet prior_text_encoder: CLIPTextModel prior_tokenizer: CLIPTokenizer prior_scheduler: DDPMWuerstchenScheduler prior_feature_extractor: Optional = None prior_image_encoder: Optional = None ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (StableCascadeUNet) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). prior_prior (StableCascadeUNet) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Stable Cascade. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None images: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. images (torch.Tensor, PIL.Image.Image, List[torch.Tensor], List[PIL.Image.Image], optional) — +The images to guide the image generation for the prior. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. 
width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeine class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadeCombinedPipeline + +>>> pipe = StableCascadeCombinedPipeline.from_pretrained("stabilityai/stable-cascade-combined", torch_dtype=torch.bfloat16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. StableCascadePriorPipeline class diffusers.StableCascadePriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection prior: StableCascadeUNet scheduler: DDPMWuerstchenScheduler resolution_multiple: float = 42.67 feature_extractor: Optional = None image_encoder: Optional = None ) Parameters prior (StableCascadeUNet) — +The Stable Cascade prior used to approximate the image embedding from the text and/or image embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder (laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate the image embedding. resolution_multiple (float, optional, defaults to 42.67) — +Default resolution multiple for the generated images. Pipeline for generating the image prior for Stable Cascade. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None images: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None prompt_embeds: Optional = None prompt_embeds_pooled: Optional = None negative_prompt_embeds: Optional = None negative_prompt_embeds_pooled: Optional = None image_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image.
width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_embeds_pooled (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. negative_prompt_embeds_pooled (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds_pooled will be generated from negative_prompt input +argument. image_embeds (torch.FloatTensor, optional) — +Pre-generated image embeddings. Can be used to easily tweak image inputs, e.g. prompt weighting. +If not provided, image embeddings will be generated from image input argument if existing. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadePriorPipeline + +>>> prior_pipe = StableCascadePriorPipeline.from_pretrained( +... "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) StableCascadePriorPipelineOutput class diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput < source > ( image_embeddings: Union prompt_embeds: Union negative_prompt_embeds: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt prompt_embeds (torch.FloatTensor) — +Text embeddings for the prompt. negative_prompt_embeds (torch.FloatTensor) — +Text embeddings for the negative prompt. Output class for StableCascadePriorPipeline. StableCascadeDecoderPipeline class diffusers.StableCascadeDecoderPipeline < source > ( decoder: StableCascadeUNet tokenizer: CLIPTokenizer text_encoder: CLIPTextModel scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (StableCascadeUNet) — +The Stable Cascade decoder unet. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with the prior to generate the image embedding. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and +width=int(24*10.67)=256 in order to match the training conditions. Pipeline for generating images from the Stable Cascade model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 10 guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters image_embeddings (torch.FloatTensor or List[torch.FloatTensor]) — +Image embeddings either extracted from an image or generated by a prior model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline + +>>> prior_pipe = StableCascadePriorPipeline.from_pretrained( +... "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16 +... ).to("cuda") +>>> gen_pipe = StableCascadeDecoderPipeline.from_pretrained( +... "stabilityai/stable-cascade", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) diff --git a/scrapped_outputs/47f8678268f1f308a5bc7692c8f98646.txt b/scrapped_outputs/47f8678268f1f308a5bc7692c8f98646.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac901ea3d3abb278b5ccaa74a8178f52ddb208f5 --- /dev/null +++ b/scrapped_outputs/47f8678268f1f308a5bc7692c8f98646.txt @@ -0,0 +1,286 @@ +Community pipelines + +For more information about community pipelines, please have a look at this issue. +Community examples consist of both inference and training examples that have been added by the community. +Please have a look at the following table to get an overview of all community examples.
Click on the Code Example to get a copy-and-paste ready code example that you can try out. +If a community doesn’t work as expected, please open an issue and ping the author on it. +Example +Description +Code Example +Colab +Author +CLIP Guided Stable Diffusion +Doing CLIP guidance for text to image generation with Stable Diffusion +CLIP Guided Stable Diffusion + +Suraj Patil +One Step U-Net (Dummy) +Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) +One Step U-Net +- +Patrick von Platen +Stable Diffusion Interpolation +Interpolate the latent space of Stable Diffusion between different prompts/seeds +Stable Diffusion Interpolation +- +Nate Raw +Stable Diffusion Mega +One Stable Diffusion Pipeline with all functionalities of Text2Image, Image2Image and Inpainting +Stable Diffusion Mega +- +Patrick von Platen +Long Prompt Weighting Stable Diffusion +One Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. +Long Prompt Weighting Stable Diffusion +- +SkyTNT +Speech to Image +Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images +Speech to Image +- +Mikail Duzenli +To load a custom pipeline you just need to pass the custom_pipeline argument to DiffusionPipeline, as one of the files in diffusers/examples/community. Feel free to send a PR with your own pipelines, we will merge them quickly. + + + Copied +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder" +) + +Example usages + + +CLIP Guided Stable Diffusion + +CLIP guided stable diffusion can help to generate more realistic images +by guiding stable diffusion at every denoising step with an additional CLIP model. +The following code requires roughly 12GB of GPU RAM. + + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel +import torch + + +feature_extractor = CLIPImageProcessor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K") +clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16) + + +guided_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + torch_dtype=torch.float16, +) +guided_pipeline.enable_attention_slicing() +guided_pipeline = guided_pipeline.to("cuda") + +prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" + +generator = torch.Generator(device="cuda").manual_seed(0) +images = [] +for i in range(4): + image = guided_pipeline( + prompt, + num_inference_steps=50, + guidance_scale=7.5, + clip_guidance_scale=100, + num_cutouts=4, + use_cutouts=False, + generator=generator, + ).images[0] + images.append(image) + +# save images locally +for i, img in enumerate(images): + img.save(f"./clip_guided_sd/image_{i}.png") +The images list contains a list of PIL images that can be saved locally or displayed directly in a google colab. +Generated images tend to be of higher qualtiy than natively using stable diffusion. E.g. the above script generates the following images: +. 
+ +One Step Unet + +The dummy “one-step-unet” can be run as follows: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Note: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841). + +Stable Diffusion Interpolation + +The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes. + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + safety_checker=None, # Very important for videos...lots of false positives while interpolating + custom_pipeline="interpolate_stable_diffusion", +).to("cuda") +pipe.enable_attention_slicing() + +frame_filepaths = pipe.walk( + prompts=["a dog", "a cat", "a horse"], + seeds=[42, 1337, 1234], + num_interpolation_steps=16, + output_dir="./dreams", + batch_size=4, + height=512, + width=512, + guidance_scale=8.5, + num_inference_steps=50, +) +The output of the walk(...) function returns a list of images saved under the folder as defined in output_dir. You can use these images to create videos of stable diffusion. +Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality. + +Stable Diffusion Mega + +The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class. + + + Copied +#!/usr/bin/env python3 +from diffusers import DiffusionPipeline +import PIL +import requests +from io import BytesIO +import torch + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="stable_diffusion_mega", + torch_dtype=torch.float16, +) +pipe.to("cuda") +pipe.enable_attention_slicing() + + +### Text-to-Image + +images = pipe.text2img("An astronaut riding a horse").images + +### Image-to-Image + +init_image = download_image( + "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +### Inpainting + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +prompt = "a cat sitting on a bench" +images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images +As shown above this one pipeline can run all both “text-to-image”, “image-to-image”, and “inpainting” in one pipeline. + +Long Prompt Weighting Stable Diffusion + +The Pipeline lets you input prompt without 77 token length limit. 
You can also increase a word’s weighting by wrapping it in ”()” or decrease it by wrapping it in ”[]”. +The Pipeline also lets you use the main use cases of the Stable Diffusion pipeline in a single class. + +pytorch + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16 +) +pipe = pipe.to("cuda") + +prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms" +neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] + +onnxruntime + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="lpw_stable_diffusion_onnx", + revision="onnx", + provider="CUDAExecutionProvider", +) + +prompt = "a photo of an astronaut riding a horse on mars, best quality" +neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] +If you see the warning Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ). Running this sequence through the model will result in indexing errors, do not worry; this is expected for this pipeline. + +Speech to Image + +The following code can generate an image from an audio sample using the pre-trained OpenAI whisper-small model and Stable Diffusion.
+ + + Copied +import torch + +import matplotlib.pyplot as plt +from datasets import load_dataset +from diffusers import DiffusionPipeline +from transformers import ( + WhisperForConditionalGeneration, + WhisperProcessor, +) + + +device = "cuda" if torch.cuda.is_available() else "cpu" + +ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") + +audio_sample = ds[3] + +text = audio_sample["text"].lower() +speech_data = audio_sample["audio"]["array"] + +model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device) +processor = WhisperProcessor.from_pretrained("openai/whisper-small") + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="speech_to_image_diffusion", + speech_model=model, + speech_processor=processor, + + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +output = diffuser_pipeline(speech_data) +plt.imshow(output.images[0]) +This example produces the following image: diff --git a/scrapped_outputs/480fcfa6bbfb4b2a0baf3306a1fcb865.txt b/scrapped_outputs/480fcfa6bbfb4b2a0baf3306a1fcb865.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/480fcfa6bbfb4b2a0baf3306a1fcb865.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. 
Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. 
+>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 4) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps used to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set, this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 8.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
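The enable_freeu() and VAE slicing/tiling helpers documented above are toggled directly on the pipeline instance. Below is a minimal, illustrative sketch that assumes the SimianLuo/LCM_Dreamshaper_v7 checkpoint from the example above and a CUDA device; the FreeU factors are placeholder values commonly cited for Stable Diffusion v1 pipelines, not tuned recommendations for LCM, so verify them against the official FreeU repository.
>>> import torch
>>> from diffusers import AutoPipelineForImage2Image

>>> pipe = AutoPipelineForImage2Image.from_pretrained(
...     "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> # Decode latents slice-by-slice and tile-by-tile to lower peak memory usage.
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()

>>> # FreeU re-weights skip (s1, s2) and backbone (b1, b2) features in the UNet.
>>> pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

>>> # Both mechanisms can be reverted at any time.
>>> pipe.disable_freeu()
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()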
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/488733a9a7e6761165c084958f8b86cb.txt b/scrapped_outputs/488733a9a7e6761165c084958f8b86cb.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbba08e6089c48721c4daf719b002f35502d6466 --- /dev/null +++ b/scrapped_outputs/488733a9a7e6761165c084958f8b86cb.txt @@ -0,0 +1,573 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. 
Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR, which recommends the following for ODE/SDE solvers: set use_karras_sigmas=True or lu_lambdas=True to improve image quality; set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE). A minimal configuration sketch is included after the StableDiffusionXLPipeline reference below. Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t work well for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. 
If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
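As noted in the Tips above, DPM++ style solvers can become numerically unstable on SDXL at low step counts. The sketch below shows the recommended configuration; it assumes DPMSolverMultistepScheduler and the stabilityai/stable-diffusion-xl-base-1.0 checkpoint, and the use_karras_sigmas/euler_at_final flags should be verified against your installed diffusers version. It also illustrates passing a second prompt via prompt_2, which is routed to text_encoder_2.
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

>>> pipe = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")

>>> # Rebuild the solver from the existing config with the settings suggested in the Tips.
>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(
...     pipe.scheduler.config, use_karras_sigmas=True, euler_at_final=True
... )

>>> # prompt goes to the first text encoder, prompt_2 to the second one.
>>> image = pipe(
...     prompt="a photo of an astronaut riding a horse on mars",
...     prompt_2="cinematic lighting, highly detailed",
...     num_inference_steps=30,
... ).images[0]
>>> image.save("sdxl_dpmpp.png")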
StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. 
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
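The denoising_end and denoising_start parameters documented in the __call__ sections above are what connect the base and refiner checkpoints in the “Mixture of Denoisers” setup: the base pipeline returns latents for the first part of the schedule and the refiner finishes the remaining steps. A minimal sketch, assuming the public base and refiner checkpoints and an illustrative 0.8 handover point:
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

>>> base = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
... ).to("cuda")
>>> refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0",
...     text_encoder_2=base.text_encoder_2,
...     vae=base.vae,
...     torch_dtype=torch.float16,
... ).to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"

>>> # The base pipeline stops after 80% of the schedule and hands over latents ...
>>> latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
>>> # ... which the refiner picks up and denoises for the final 20% of the schedule.
>>> image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]
>>> image.save("astronaut_refined.png")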
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. 
If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all masked areas, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1).
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width).
Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +...
) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/489e8dde223810e69c206a6054d92a2e.txt b/scrapped_outputs/489e8dde223810e69c206a6054d92a2e.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/489e8dde223810e69c206a6054d92a2e.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. 
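These activation classes are small nn.Module building blocks (a linear projection followed by the nonlinearity), so they can be tried in isolation before reading the full reference below. A minimal sketch; the dimensions and input shape are illustrative assumptions, not values required by the library:

import torch
from diffusers.models.activations import GELU, GEGLU

# illustrative (batch, sequence, channels) input
hidden = torch.randn(2, 77, 320)

# GELU: projects dim_in -> dim_out, then applies GELU (optionally the tanh approximation)
act = GELU(dim_in=320, dim_out=1280, approximate="tanh")
out = act(hidden)          # shape: (2, 77, 1280)

# GEGLU: gated variant; internally projects to 2 * dim_out and gates one half with the other
gated = GEGLU(dim_in=320, dim_out=1280)
out_gated = gated(hidden)  # shape: (2, 77, 1280)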
GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/48c1a751792fd53d1c49dc2714b6f781.txt b/scrapped_outputs/48c1a751792fd53d1c49dc2714b6f781.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/49176672b996948dc9cfeecb7aa3c6df.txt b/scrapped_outputs/49176672b996948dc9cfeecb7aa3c6df.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb94d834121295af6b4bed33934b6c96b5fc6a51 --- /dev/null +++ b/scrapped_outputs/49176672b996948dc9cfeecb7aa3c6df.txt @@ -0,0 +1,79 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. It only generates images that resemble its training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. 
If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to 
the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. + + + + Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub + + +If you’re training with more than one GPU, add the --multi_gpu parameter to the training command: Copied accelerate launch --multi_gpu train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub + + +The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/491914bf0af969a9754b9bc712f5965d.txt b/scrapped_outputs/491914bf0af969a9754b9bc712f5965d.txt new file mode 100644 index 0000000000000000000000000000000000000000..92fbeed0765a53040c69079974db40c4e9eb3387 --- /dev/null +++ b/scrapped_outputs/491914bf0af969a9754b9bc712f5965d.txt @@ -0,0 +1,66 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. 
clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. 
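For context before the remaining method reference, a minimal end-to-end sketch of swapping LCMScheduler into a pipeline and sampling with very few steps. The checkpoint name is only an example of an LCM-distilled model, and the prompt and settings are illustrative:

import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Example LCM-distilled checkpoint; any LCM checkpoint should behave similarly.
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16).to("cuda")

# Rebuild the scheduler from the pipeline's existing scheduler config.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# LCM needs very few denoising steps; 4 is a common choice in the 1-8 range.
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=4, guidance_scale=8.0).images[0]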
set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/49269f1c01b782bf7bec7280a9a1b43e.txt b/scrapped_outputs/49269f1c01b782bf7bec7280a9a1b43e.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/49269f1c01b782bf7bec7280a9a1b43e.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. 
The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. 
Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. 
Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = 
pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/496522afcc839f4123d4a82803747706.txt b/scrapped_outputs/496522afcc839f4123d4a82803747706.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/49720694db6fd6133824297971e7d2ae.txt b/scrapped_outputs/49720694db6fd6133824297971e7d2ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/49720694db6fd6133824297971e7d2ae.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. 
We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions. Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +gneration. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
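The introduction notes that aMUSEd is at its fastest, relative to diffusion models, at larger batch sizes. The snippet below is a minimal sketch (not from the official docs; the output file names are made up) that uses the documented num_images_per_prompt argument to generate a small batch from a single prompt in one call:

import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained(
    "amused/amused-512", variant="fp16", torch_dtype=torch.float16
).to("cuda")

# One prompt, four samples, using the default 12-step schedule
prompt = "a photo of an astronaut riding a horse on mars"
images = pipe(prompt, num_images_per_prompt=4, guidance_scale=10.0).images
for i, image in enumerate(images):
    image.save(f"amused_{i}.png")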
class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/49a15ae347591b8cb3e066a009d5cb4a.txt b/scrapped_outputs/49a15ae347591b8cb3e066a009d5cb4a.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0984527432f3dd6b6a86e93a43c1b2ef304cb3 --- /dev/null +++ b/scrapped_outputs/49a15ae347591b8cb3e066a009d5cb4a.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. 
vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Blurs an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked ares in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked ares in an image, and expands region to match the aspect ratio of the original image; +for example, if user drew mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image(PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. if it is a numpy array, should have +shape [batch, height, width] or [batch, height, width, channel] if it is a pytorch tensor, should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the height of image input. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use the width of the image` input. This function return the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. 
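To make the conversion and normalization helpers above concrete, here is a minimal sketch (the input file name is an assumption) that round-trips a PIL image through the NumPy/PyTorch utilities:

from PIL import Image
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()
pil_image = Image.open("input.png").convert("RGB")  # hypothetical local file

np_image = processor.pil_to_numpy(pil_image)   # float32 array in [0, 1], shape (1, H, W, 3)
pt_image = processor.numpy_to_pt(np_image)     # tensor of shape (1, 3, H, W)
pt_image = processor.normalize(pt_image)       # rescaled to [-1, 1], as expected by the VAE

# Undo the normalization and convert back to a PIL image
restored = processor.numpy_to_pil(processor.pt_to_numpy(processor.denormalize(pt_image)))[0]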
postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of supported formats. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the get_default_height_width() to get default height. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. 
class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/49ab50767090db0b02590867efc522b1.txt b/scrapped_outputs/49ab50767090db0b02590867efc522b1.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/49ab50767090db0b02590867efc522b1.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. 
Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
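As a brief sketch of how the memory helpers documented above fit into an ordinary call (the prompt and token indices are taken from the example earlier in this section; the checkpoint is the same one used there):

import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.enable_vae_slicing()  # decode the VAE output in slices to reduce peak memory

image = pipe("a cat and a frog", token_indices=[2, 5], num_inference_steps=50).images[0]
pipe.disable_vae_slicing()  # restore single-step decoding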
get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/49c41f4054d49a241fc25df70c3f7106.txt b/scrapped_outputs/49c41f4054d49a241fc25df70c3f7106.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/49e4e865d74ea6194f03063617c8d03b.txt b/scrapped_outputs/49e4e865d74ea6194f03063617c8d03b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b873ba3b9d3614922057d0c02bbc129d959f1e64 --- /dev/null +++ b/scrapped_outputs/49e4e865d74ea6194f03063617c8d03b.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
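Before the class reference below, here is a minimal sketch (the checkpoint id is an assumption) of loading the conditional 2D UNet from an existing Stable Diffusion repository and running a single forward pass with random inputs:

import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Dummy latents, timestep, and CLIP text embeddings (77 tokens, 768-dim for SD v1.x)
sample = torch.randn(1, unet.config.in_channels, 64, 64, device="cuda", dtype=torch.float16)
timestep = torch.tensor([10], device="cuda")
encoder_hidden_states = torch.randn(
    1, 77, unet.config.cross_attention_dim, device="cuda", dtype=torch.float16
)

with torch.no_grad():
    noise_pred = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states).sample
print(noise_pred.shape)  # torch.Size([1, 4, 64, 64])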
UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. 
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. 
+time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. 
This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. 
flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/49f667b322cddc5cfbc8e209048590b5.txt b/scrapped_outputs/49f667b322cddc5cfbc8e209048590b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cb15709ef2db459331589418952eb68057fc110 --- /dev/null +++ b/scrapped_outputs/49f667b322cddc5cfbc8e209048590b5.txt @@ -0,0 +1,26 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... diff --git a/scrapped_outputs/4a1e7e374284133e4d0c5c21469a1736.txt b/scrapped_outputs/4a1e7e374284133e4d0c5c21469a1736.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/4a1e7e374284133e4d0c5c21469a1736.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! 
rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. 
sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
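Taken together, set_timesteps() and step() are what a pipeline runs internally during sampling. Below is a minimal sketch of a manual DDIM denoising loop; the UNet2DModel checkpoint id, sample shape, and step count are illustrative assumptions rather than part of this reference.

import torch
from diffusers import DDIMScheduler, UNet2DModel

# Illustrative checkpoint; any pixel-space DDPM-style UNet can be substituted.
unet = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler = DDIMScheduler()  # default betas; in practice load the checkpoint's scheduler config

scheduler.set_timesteps(50)  # run 50 of the 1000 training timesteps at inference

sample = torch.randn(
    1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size
).to("cuda")

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = unet(sample, t).sample  # epsilon (noise) prediction
    # step() reverses one diffusion step; eta=0.0 keeps DDIM sampling deterministic
    sample = scheduler.step(noise_pred, t, sample, eta=0.0).prev_sample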
DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/4a4176e1d453e93039eba25802c89aed.txt b/scrapped_outputs/4a4176e1d453e93039eba25802c89aed.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/4a4176e1d453e93039eba25802c89aed.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. 
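Image-to-image and inpainting are noted above as supported but not shown. A minimal image-to-image sketch follows, assuming Optimum provides ORTStableDiffusionImg2ImgPipeline and reusing the sd_v15_onnx export from earlier; the input image path is a placeholder.

from diffusers.utils import load_image
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline

pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("sd_v15_onnx")

init_image = load_image("your-input-image.png").resize((512, 512))  # placeholder: any local image or URL
prompt = "sailing ship in storm by Leonardo da Vinci, oil painting"
image = pipeline(prompt, image=init_image, strength=0.75).images[0]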
diff --git a/scrapped_outputs/4a525385ee60b26579760515e0b17d83.txt b/scrapped_outputs/4a525385ee60b26579760515e0b17d83.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4a6073927d6cd69e745ef02fb6f2ee5f.txt b/scrapped_outputs/4a6073927d6cd69e745ef02fb6f2ee5f.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/4a6073927d6cd69e745ef02fb6f2ee5f.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/4a642a4ffbe41014ca67816549fca1ce.txt b/scrapped_outputs/4a642a4ffbe41014ca67816549fca1ce.txt new file mode 100644 index 0000000000000000000000000000000000000000..f6343d343964a165861c1cd9fd3b9fbe84354c01 --- /dev/null +++ b/scrapped_outputs/4a642a4ffbe41014ca67816549fca1ce.txt @@ -0,0 +1,100 @@ +Parallel Sampling of Diffusion Models Parallel Sampling of Diffusion Models is by Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari. The abstract from the paper is: Diffusion models are powerful generative models but suffer from slow sampling, often taking 1000 sequential denoising steps for one sample. As a result, considerable efforts have been directed toward reducing the number of denoising steps, but these methods hurt sample quality. Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)? In spite of the sequential nature of the denoising steps, we show that surprisingly it is possible to parallelize sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence. With this insight, we present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel. ParaDiGMS is the first diffusion sampling method that enables trading compute for speed and is even compatible with existing fast sampling techniques such as DDIM and DPMSolver. Using ParaDiGMS, we improve sampling speed by 2-4x across a range of robotics and image generation models, giving state-of-the-art sampling speeds of 0.2s on 100-step DiffusionPolicy and 14.6s on 1000-step StableDiffusion-v2 with no measurable degradation of task reward, FID score, or CLIP score. 
The original codebase can be found at AndyShih12/paradigms, and the pipeline was contributed by AndyShih12. ❤️ Tips This pipeline improves sampling speed by running denoising steps in parallel, at the cost of increased total FLOPs. +Therefore, it is better to call this pipeline when running on multiple GPUs. Otherwise, without enough GPU bandwidth +sampling may be even slower than sequential sampling. The two parameters to play with are parallel (batch size) and tolerance. If it fits in memory, for a 1000-step DDPM you can aim for a batch size of around 100 (for example, 8 GPUs and batch_per_device=12 to get parallel=96). A higher batch size may not fit in memory, and lower batch size gives less parallelism. For tolerance, using a higher tolerance may get better speedups but can risk sample quality degradation. If there is quality degradation with the default tolerance, then use a lower tolerance like 0.001. For a 1000-step DDPM on 8 A100 GPUs, you can expect around a 3x speedup from StableDiffusionParadigmsPipeline compared to the StableDiffusionPipeline +by setting parallel=80 and tolerance=0.1. 🤗 Diffusers offers distributed inference support for generating multiple prompts +in parallel on multiple GPUs. But StableDiffusionParadigmsPipeline is designed for speeding up sampling of a single prompt by using multiple GPUs. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionParadigmsPipeline class diffusers.StableDiffusionParadigmsPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using a parallelized version of Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 parallel: int = 10 tolerance: float = 0.1 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None debug: bool = False clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. parallel (int, optional, defaults to 10) — +The batch size to use when doing parallel sampling. More parallelism may lead to faster inference but +requires higher memory usage and can also require more total FLOPs. tolerance (float, optional, defaults to 0.1) — +The error tolerance for determining when to slide the batch window forward for parallel sampling. Lower +tolerance usually leads to less or no degradation. Higher tolerance is faster but can risk degradation +of sample quality. The tolerance is specified as a ratio of the scheduler’s noise magnitude. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. debug (bool, optional, defaults to False) — +Whether or not to run in debug mode. In debug mode, torch.cumsum is evaluated using the CPU. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DDPMParallelScheduler +>>> from diffusers import StableDiffusionParadigmsPipeline + +>>> scheduler = DDPMParallelScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler") + +>>> pipe = StableDiffusionParadigmsPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", scheduler=scheduler, torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> ngpu, batch_per_device = torch.cuda.device_count(), 5 +>>> pipe.wrapped_unet = torch.nn.DataParallel(pipe.unet, device_ids=[d for d in range(ngpu)]) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, parallel=ngpu * batch_per_device, num_inference_steps=1000).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/4a7307060d9c117df4e101947479af2a.txt b/scrapped_outputs/4a7307060d9c117df4e101947479af2a.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/4a7307060d9c117df4e101947479af2a.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from the source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. 
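As a minimal sketch of where that call fits in a typical workflow (the runwayml/stable-diffusion-v1-5 checkpoint is used purely as an example):

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# route attention through xFormers' memory-efficient kernels
pipeline.enable_xformers_memory_efficient_attention()

image = pipeline("an astronaut riding a horse on the moon").images[0]

# revert to the default attention implementation if needed
pipeline.disable_xformers_memory_efficient_attention()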
According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) in some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/4a771b6fc50538af5cbc1b8d5bf6c177.txt b/scrapped_outputs/4a771b6fc50538af5cbc1b8d5bf6c177.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5eb4e7be4c261d898545230090f92fda4b54478 --- /dev/null +++ b/scrapped_outputs/4a771b6fc50538af5cbc1b8d5bf6c177.txt @@ -0,0 +1,192 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", + torch_dtype=torch.float16 +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16 +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. 
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. 
Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! 
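For example, both experts can be switched to another scheduler before running the same two-stage loop. A brief sketch with EulerDiscreteScheduler (any other compatible scheduler can be substituted), assuming base and refiner are loaded as above:

from diffusers import EulerDiscreteScheduler

# swap the scheduler on both experts; the denoising_end/denoising_start split stays the same
base.scheduler = EulerDiscreteScheduler.from_config(base.scheduler.config)
refiner.scheduler = EulerDiscreteScheduler.from_config(refiner.scheduler.config)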
Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 
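To make the size conditioning above concrete, the sketch below passes original_size and target_size explicitly; the values shown are only illustrative.

from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# (1024, 1024) mimics the high-quality training images; a small original_size such as
# (256, 256) instead steers generation toward low-resolution-looking outputs
image = pipe(
    prompt=prompt,
    original_size=(1024, 1024),
    target_size=(1024, 1024),
).images[0]
image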
🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. 
Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/4ab5f45fe162a7c09cc724610e6f051f.txt b/scrapped_outputs/4ab5f45fe162a7c09cc724610e6f051f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/4ab5f45fe162a7c09cc724610e6f051f.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. 
Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! 
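One way to do that upload is with the huggingface_hub client; this is only a sketch, and the repository id below is a placeholder you should replace with your own dataset repository: Copied
from huggingface_hub import upload_file

# Placeholder repo id; create or reuse a dataset repository you own.
upload_file(
    path_or_fileobj="3d_cake.glb",
    path_in_repo="3d_cake.glb",
    repo_id="your-username/shap-e-meshes",
    repo_type="dataset",
)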
diff --git a/scrapped_outputs/4ad0c3f4271d213650a90137d655d9dc.txt b/scrapped_outputs/4ad0c3f4271d213650a90137d655d9dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/4ad0c3f4271d213650a90137d655d9dc.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. 
beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
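To see how set_timesteps(), scale_model_input(), and step() fit together outside of a full pipeline, here is a minimal manual sampling sketch. The unconditional checkpoint is only illustrative, and the DDIMScheduler is built from the pipeline's scheduler config as in the tips above: Copied
import torch
from diffusers import DDPMPipeline, DDIMScheduler

# Illustrative unconditional checkpoint; any UNet2DModel-based pipeline works similarly.
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
unet = pipe.unet

scheduler.set_timesteps(50)
sample = torch.randn(
    1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size, device="cuda"
)

for t in scheduler.timesteps:
    # scale_model_input is an identity for DDIM but keeps the loop scheduler-agnostic
    model_input = scheduler.scale_model_input(sample, t)
    with torch.no_grad():
        noise_pred = unet(model_input, t).sample
    # step() reverses the diffusion process by one timestep
    sample = scheduler.step(noise_pred, t, sample).prev_sample

# `sample` now holds the denoised image tensor, roughly in [-1, 1]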
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/4ad3d6120b83f06633c412b67522c154.txt b/scrapped_outputs/4ad3d6120b83f06633c412b67522c154.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/4ad3d6120b83f06633c412b67522c154.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. 
Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
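Putting the prompting tips and the memory helpers above together, a usage sketch might look like the following; it reuses the checkpoint from the example above, and num_waveforms_per_prompt > 1 triggers the automatic CLAP-based ranking described earlier: Copied
import torch
from diffusers import MusicLDMPipeline

pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)

# Keep memory usage low: offload whole sub-models to the CPU and decode the VAE in slices.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
negative_prompt = "low quality, average quality"

audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=4,  # best-ranked waveform comes first
).audios[0]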
diff --git a/scrapped_outputs/4adbaa6886c12c868afa521c0c0a43de.txt b/scrapped_outputs/4adbaa6886c12c868afa521c0c0a43de.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4af8e2b2a1307a04a95c36e9955fd26a.txt b/scrapped_outputs/4af8e2b2a1307a04a95c36e9955fd26a.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/4af8e2b2a1307a04a95c36e9955fd26a.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to be masked out with mask_image and repainted according to prompt). For both numpy arrays and pytorch tensors, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but if passing latents directly they are not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to mask the image. White pixels in the mask are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W), and for a numpy array (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If padding_mask_crop is not None, it first finds a rectangular region with the same aspect ratio as the image that contains all of the masked area, and then expands that region by padding_mask_crop. The image and mask_image are then cropped based on the expanded region before being resized to the original image size for inpainting. This is useful when the masked area is small while the image is large and contains information irrelevant to the inpainting task, such as background.
strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
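As a short usage sketch for the method above (the LoRA path below is a placeholder; any LoRA trained against a Stable Diffusion v1.x base should load the same way): Copied
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder LoRA path or repo id; adapter_name lets you reference this adapter later.
pipe.load_lora_weights("path/to/your-lora", adapter_name="style")

# Generation then proceeds exactly as in the __call__ example above,
# passing prompt, image, and mask_image to the pipeline.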
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4b0475e3ec8044db3aa60d9e65e70194.txt b/scrapped_outputs/4b0475e3ec8044db3aa60d9e65e70194.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/4b0475e3ec8044db3aa60d9e65e70194.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. 
latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. 
When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. 
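Before the full UNet example that follows, the mechanism can be illustrated on a toy module. This is only a sketch of what torch.jit.trace captures; the ToyModel class is illustrative and not part of the original guide.

import torch

class ToyModel(torch.nn.Module):
    def forward(self, x):
        # Only the tensor operations executed on the example input are recorded.
        return torch.nn.functional.gelu(x) * 2.0

model = ToyModel().eval()
example_input = torch.randn(1, 8)

# Run the example input through the model once and record the executed operations.
traced = torch.jit.trace(model, example_input)

print(traced.graph)               # the captured graph that the JIT can optimize
print(traced(torch.randn(1, 8)))  # the traced module is called like the original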
To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. 
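Because PyTorch 2.0 ships its own memory-efficient attention, you may prefer it over xFormers. The sketch below assumes a diffusers version that exposes AttnProcessor2_0, which routes attention through torch.nn.functional.scaled_dot_product_attention; it is an alternative to the xFormers path described next.

import torch
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# Explicitly select PyTorch 2.0 scaled dot-product attention (usually already the default).
pipe.unet.set_attn_processor(AttnProcessor2_0())

with torch.inference_mode():
    image = pipe("a small cat").images[0]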
To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/4b0f1d4739310fa8fb52ae9fd389436b.txt b/scrapped_outputs/4b0f1d4739310fa8fb52ae9fd389436b.txt new file mode 100644 index 0000000000000000000000000000000000000000..825b60520b59c30a8c2a5c018ff51010ada6643b --- /dev/null +++ b/scrapped_outputs/4b0f1d4739310fa8fb52ae9fd389436b.txt @@ -0,0 +1,376 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
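As a rough sketch of how the two callback parameters above fit together (the log_latents function and what it does are illustrative, not part of the pipeline API):

# A step-end callback must accept (pipeline, step, timestep, callback_kwargs)
# and return the callback_kwargs dict (possibly with modified tensors).
def log_latents(pipeline, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    print(f"step {step}: latent std = {latents.std().item():.3f}")
    return callback_kwargs

# Hypothetical usage with the __call__ documented here:
# images = pipe(
#     prompt="A fantasy landscape",
#     image=init_image,
#     callback_on_step_end=log_latents,
#     callback_on_step_end_tensor_inputs=["latents"],
# ).images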
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. 
model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
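A minimal usage sketch of load_lora_weights; the repository name and adapter name below are placeholders rather than values from the original docs:

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "your-username/your-lora" is a placeholder for a Hub repository containing LoRA weights.
pipe.load_lora_weights("your-username/your-lora", adapter_name="style")

# The LoRA influence can typically be scaled at call time:
# images = pipe(prompt, image=init_image, cross_attention_kwargs={"scale": 0.8}).images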
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
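A short sketch of precomputing embeddings with encode_prompt and feeding them back to the pipeline. It assumes a recent diffusers version in which encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple:

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compute the embeddings once; they can be reused across several pipeline calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# Hypothetical call reusing the precomputed embeddings (init_image is an image you provide):
# images = pipe(
#     prompt_embeds=prompt_embeds,
#     negative_prompt_embeds=negative_prompt_embeds,
#     image=init_image,
# ).images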
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. 
image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... 
strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4b429eb9047062ad24900539a2922e9e.txt b/scrapped_outputs/4b429eb9047062ad24900539a2922e9e.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/4b429eb9047062ad24900539a2922e9e.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. 
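If you plan to train on your own data, it can help to first inspect the example dataset this guide uses. The column names below are the ones exposed by fusing/fill50k; a custom dataset may use different names, which the script's column arguments let you map.

from datasets import load_dataset

# The ControlNet training script expects a target image, a conditioning image, and a caption
# per example. fusing/fill50k exposes them as "image", "conditioning_image", and "text".
dataset = load_dataset("fusing/fill50k", split="train")
print(dataset.column_names)   # ['image', 'conditioning_image', 'text']
print(dataset[0]["text"])     # the caption of the first example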
Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/4b4aef6776c2dc4d0bde4f95a034bb43.txt b/scrapped_outputs/4b4aef6776c2dc4d0bde4f95a034bb43.txt new file mode 100644 index 0000000000000000000000000000000000000000..d38fe382771f8913300e4beb3a4637c7f124a711 --- /dev/null +++ b/scrapped_outputs/4b4aef6776c2dc4d0bde4f95a034bb43.txt @@ -0,0 +1,41 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps.
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/4ba070b977039a8f345b269a1c14bf82.txt b/scrapped_outputs/4ba070b977039a8f345b269a1c14bf82.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/4ba070b977039a8f345b269a1c14bf82.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. 
From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline with the LCMScheduler and then load the LCM-distilled UNet for SDXL. Together with the distilled UNet and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically needed for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range, so that is the ideal range for guidance_scale. However, disabling guidance by setting guidance_scale to 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well.
Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all inputs, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use lcm-sdxl with the Canny T2I-Adapter. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/4bf3a56ddac8d445a2cd6ba274ca5f22.txt b/scrapped_outputs/4bf3a56ddac8d445a2cd6ba274ca5f22.txt new file mode 100644 index
0000000000000000000000000000000000000000..bc216baddd33dd12967fd5462ba2441730a14481 --- /dev/null +++ b/scrapped_outputs/4bf3a56ddac8d445a2cd6ba274ca5f22.txt @@ -0,0 +1,45 @@ +Text-Guided Image-Inpainting + +The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion specifically trained for in-painting tasks. +Note that this model is distributed separately from the regular Stable Diffusion model, so you have to accept its license even if you accepted the Stable Diffusion one in the past. +Please, visit the model card, read the license carefully and tick the checkbox if you agree. You have to be a registered user in 🤗 Hugging Face Hub, and you’ll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation. + + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +image +mask_image +prompt +Output + + +Face of a yellow cat, high resolution, sitting on a park bench + +You can also run this example on colab +A previous experimental implementation of in-painting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn't contain the new model will still apply the old in-painting method. diff --git a/scrapped_outputs/4c4e57567013270a0bf8ee02c5213d30.txt b/scrapped_outputs/4c4e57567013270a0bf8ee02c5213d30.txt new file mode 100644 index 0000000000000000000000000000000000000000..b873ba3b9d3614922057d0c02bbc129d959f1e64 --- /dev/null +++ b/scrapped_outputs/4c4e57567013270a0bf8ee02c5213d30.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. 
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. 
If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. 
“text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. +time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
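As a minimal sketch of how these two methods might be used in practice (the scaling values below are only illustrative; check the official FreeU repository for values recommended for your specific model):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# s1/s2 attenuate the skip-connection features, b1/b2 amplify the backbone features
pipe.unet.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipe("an astronaut riding a horse on the moon").images[0]

# Restore the default behavior once FreeU is no longer wanted
pipe.unet.disable_freeu()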
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. 
down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. 
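To make the forward signature above concrete, here is a minimal sketch of a single denoising call on random inputs (the shapes assume a Stable Diffusion v1.x checkpoint, where the latent space has 4 channels and the text encoder produces 768-dimensional embeddings):

import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Latent sample: (batch, in_channels, height // 8, width // 8) for a 512x512 image
sample = torch.randn(1, unet.config.in_channels, 64, 64, dtype=torch.float16, device="cuda")
# Text conditioning: (batch, sequence_length, cross_attention_dim)
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim, dtype=torch.float16, device="cuda")
timestep = torch.tensor([981], device="cuda")

with torch.no_grad():
    out = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # the prediction has the same shape as the input sample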
FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. 
Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4c4f0104ae2d4ea7b1af407a992740cd.txt b/scrapped_outputs/4c4f0104ae2d4ea7b1af407a992740cd.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4c52045a0707ef4e3fed6bc5c575ef03.txt b/scrapped_outputs/4c52045a0707ef4e3fed6bc5c575ef03.txt new file mode 100644 index 0000000000000000000000000000000000000000..8c5bcb9f001a84d9b945c267456eb710daaafe80 --- /dev/null +++ b/scrapped_outputs/4c52045a0707ef4e3fed6bc5c575ef03.txt @@ -0,0 +1,104 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. 
It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. 
You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. 
prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/4c595c25f7c42c581c83e1a4d727490e.txt b/scrapped_outputs/4c595c25f7c42c581c83e1a4d727490e.txt new file mode 100644 index 0000000000000000000000000000000000000000..48649ec5c0477bba9de1fe1afcb189a2b6b4fbd9 --- /dev/null +++ b/scrapped_outputs/4c595c25f7c42c581c83e1a4d727490e.txt @@ -0,0 +1,88 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). 
+If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. unload_textual_inversion < source > ( tokens: Union = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") diff --git a/scrapped_outputs/4c7e14c926f7d6fd7f95daf359dc3f63.txt b/scrapped_outputs/4c7e14c926f7d6fd7f95daf359dc3f63.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/4c7e14c926f7d6fd7f95daf359dc3f63.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. 
Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally be fine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned on a custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image.
A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. 
Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/4ce73ace986b3056d6feb006ceabc0b7.txt b/scrapped_outputs/4ce73ace986b3056d6feb006ceabc0b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/4ce73ace986b3056d6feb006ceabc0b7.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. 
+Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1] If it’s a tensor or a list or tensors, the +expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C) It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. 
If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple.
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. 
If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, the function uses self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used.
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. 
See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4d028129bcfcf03d2b14cc566deb9093.txt b/scrapped_outputs/4d028129bcfcf03d2b14cc566deb9093.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/4d028129bcfcf03d2b14cc566deb9093.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. 
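As a quick orientation before the individual methods below, here is a short end-to-end sketch stitched together from the examples on this page: two LoRAs are loaded under named adapters, blended, and then switched off again. The adapter weights and the prompt are illustrative values only.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRAs under explicit adapter names; their layers are injected into the UNet.
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

# Blend the adapters, generate, then disable the LoRA layers again.
pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.75, 0.25])
image = pipeline("a cinematic pixel-art castle at dusk").images[0]
pipeline.disable_lora()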
delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error.
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/4d0c00cfe8fbb3ef8b90fc0888f9ed31.txt b/scrapped_outputs/4d0c00cfe8fbb3ef8b90fc0888f9ed31.txt new file mode 100644 index 0000000000000000000000000000000000000000..11ac9a3410a83a30e6fc980490a3dfac0dbf0c58 --- /dev/null +++ b/scrapped_outputs/4d0c00cfe8fbb3ef8b90fc0888f9ed31.txt @@ -0,0 +1,219 @@ +AltDiffusion AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. The abstract from the paper is: In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we altered its text encoder with a pre-trained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, COCO-CN and XTD. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at this https URL. Tips AltDiffusion is conceptually the same as Stable Diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AltDiffusionPipeline class diffusers.AltDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (XLMRobertaTokenizer) — +A XLMRobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Alt Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 timesteps: typing.List[int] = None guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None guidance_rescale: float = 0.0 clip_skip: typing.Optional[int] = None callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], NoneType] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.AltDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import AltDiffusionPipeline + +>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" +>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Alt Diffusion v1, v2, and Alt Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. AltDiffusionImg2ImgPipeline class diffusers.AltDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (XLMRobertaTokenizer) — +A XLMRobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Alt Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 timesteps: typing.List[int] = None guidance_scale: typing.Optional[float] = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: int = None callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], NoneType] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.AltDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import AltDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "BAAI/AltDiffusion-m9" +>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> # "A fantasy landscape, trending on artstation" +>>> prompt = "幻想风景, artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("幻想风景.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Alt Diffusion v1, v2, and Alt Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. AltDiffusionPipelineOutput class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Alt Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/4d457496ee278f95c8aeed3a3536e187.txt b/scrapped_outputs/4d457496ee278f95c8aeed3a3536e187.txt new file mode 100644 index 0000000000000000000000000000000000000000..0188d75f72a83031cdf7cb06c0ecd446c1bd9ed4 --- /dev/null +++ b/scrapped_outputs/4d457496ee278f95c8aeed3a3536e187.txt @@ -0,0 +1,4 @@ +Overview + +🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. +This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. 
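For readers who want a concrete starting point before the detailed guides, here is a minimal sketch of the unified from_pretrained() workflow described above, including a scheduler swap; the runwayml/stable-diffusion-v1-5 checkpoint and the choice of DPMSolverMultistepScheduler are illustrative assumptions, and any Hub repository or local path works the same way.

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load every component of a pipeline checkpoint (downloaded once, then cached).
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Swap the scheduler to trade speed against quality without reloading the other components.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]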
diff --git a/scrapped_outputs/4d49add88eb0b91ffcefce303210b2ea.txt b/scrapped_outputs/4d49add88eb0b91ffcefce303210b2ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..b873ba3b9d3614922057d0c02bbc129d959f1e64 --- /dev/null +++ b/scrapped_outputs/4d49add88eb0b91ffcefce303210b2ea.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
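To make the class reference below concrete, the following is a small sketch of loading this 2D conditional UNet from an existing Stable Diffusion checkpoint and running a single denoising forward pass; the checkpoint name and the 64x64 latent size are assumptions chosen only for illustration.

import torch
from diffusers import UNet2DConditionModel

# Load the UNet weights stored under the "unet" subfolder of a pipeline checkpoint.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Dummy inputs: a batch of noisy 4-channel latents, a timestep, and text-encoder hidden states.
sample = torch.randn(1, unet.config.in_channels, 64, 64)
timestep = torch.tensor([10])
encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)

with torch.no_grad():
    noise_pred = unet(sample, timestep, encoder_hidden_states).sample

print(noise_pred.shape)  # same spatial size as the input latents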
UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. 
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. 
+time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. 
This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. 
flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4d5a63bfa62d0a398b0bb1bd136c1c48.txt b/scrapped_outputs/4d5a63bfa62d0a398b0bb1bd136c1c48.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4d63672ae963de41cba4c88e97d47702.txt b/scrapped_outputs/4d63672ae963de41cba4c88e97d47702.txt new file mode 100644 index 0000000000000000000000000000000000000000..2f1d96dfcc69ea2ea0ea188ca7ad5cbe34206e3b --- /dev/null +++ b/scrapped_outputs/4d63672ae963de41cba4c88e97d47702.txt @@ -0,0 +1,43 @@ +Text-Guided Image-Inpainting + +The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion specifically trained for in-painting tasks. 
+ + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +image +mask_image +prompt +Output + + +Face of a yellow cat, high resolution, sitting on a park bench + +You can also run this example on colab +A previous experimental implementation of in-painting used a different, lower-quality process. To ensure backwards compatibility, loading a pretrained pipeline that doesn't contain the new model will still apply the old in-painting method. diff --git a/scrapped_outputs/4d9198713b546de19460bc93af88314c.txt b/scrapped_outputs/4d9198713b546de19460bc93af88314c.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/4d9198713b546de19460bc93af88314c.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. 
steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. 
This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/4daff58a56445e818580b9619bfe93f4.txt b/scrapped_outputs/4daff58a56445e818580b9619bfe93f4.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/4daff58a56445e818580b9619bfe93f4.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
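Since this excerpt documents only the scheduler, the snippet below is a hedged sketch of how RePaintScheduler is typically driven through RePaintPipeline. The google/ddpm-ema-celebahq-256 checkpoint, the local file names, and the keyword arguments image/mask_image are assumptions for illustration; a mask value of 0.0 marks the region to inpaint, as documented above.

import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

# Reuse the noise schedule of an unconditional DDPM checkpoint (assumed checkpoint name).
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
pipe = pipe.to("cuda")

# Hypothetical local 256x256 inputs; 0.0 in the mask marks the part to inpaint.
original_image = Image.open("celeba_hq_256.png").convert("RGB")
mask_image = Image.open("mask_256.png").convert("RGB")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,     # forward jump length, see set_timesteps above
    jump_n_sample=10,   # number of resampling jumps per chosen time
    generator=generator,
)
output.images[0].save("inpainted.png")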
diff --git a/scrapped_outputs/4db5e9a4a1e9336605a87a72c8ebd5c3.txt b/scrapped_outputs/4db5e9a4a1e9336605a87a72c8ebd5c3.txt new file mode 100644 index 0000000000000000000000000000000000000000..237bb300818d41faed294b98bc9bdc0a82efbe0d --- /dev/null +++ b/scrapped_outputs/4db5e9a4a1e9336605a87a72c8ebd5c3.txt @@ -0,0 +1,11 @@ +Installing xFormers + +We recommend the use of xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. +Installing xFormers has historically been a bit involved, as binary distributions were not always up to date. Fortunately, the project has very recently integrated a process to build pip wheels as part of the project’s continuous integration, so this should improve a lot starting from xFormers version 0.0.16. +Until xFormers 0.0.16 is deployed, you can install pip wheels using TestPyPI. These are the steps that worked for us on a Linux computer to install xFormers version 0.0.15: + + + Copied +pip install pyre-extensions==0.0.23 +pip install -i https://test.pypi.org/simple/ xformers==0.0.15.dev376 +We’ll update these instructions when the wheels are published to the official PyPI repository. diff --git a/scrapped_outputs/4dbb28afaf68a58c6cb38e7a271d2718.txt b/scrapped_outputs/4dbb28afaf68a58c6cb38e7a271d2718.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/4dbb28afaf68a58c6cb38e7a271d2718.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... 
) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/4dc61a144917d37a237f00964d520d63.txt b/scrapped_outputs/4dc61a144917d37a237f00964d520d63.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4ddf692a889252f40ddd7b3eee84a42c.txt b/scrapped_outputs/4ddf692a889252f40ddd7b3eee84a42c.txt new file mode 100644 index 0000000000000000000000000000000000000000..17a66448f05f2f6877f31f680c3ddabd2f0d30c9 --- /dev/null +++ b/scrapped_outputs/4ddf692a889252f40ddd7b3eee84a42c.txt @@ -0,0 +1,111 @@ +Merge LoRAs It can be fun and creative to use multiple LoRAs together to generate something entirely new and unique. This works by merging multiple LoRA weights together to produce images that are a blend of different styles. Diffusers provides a few methods to merge LoRAs depending on how you want to merge their weights, which can affect image quality. This guide will show you how to merge LoRAs using the set_adapters() and ~peft.LoraModel.add_weighted_adapter methods. To improve inference speed and reduce memory-usage of merged LoRAs, you’ll also see how to use the fuse_lora() method to fuse the LoRA weights with the original weights of the underlying model. For this guide, load a Stable Diffusion XL (SDXL) checkpoint and the KappaNeuro/studio-ghibli-style and Norod78/sdxl-chalkboarddrawing-lora LoRAs with the load_lora_weights() method. You’ll need to assign each LoRA an adapter_name to combine them later. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") set_adapters The set_adapters() method merges LoRA adapters by concatenating their weighted matrices. Use the adapter name to specify which LoRAs to merge, and the adapter_weights parameter to control the scaling for each LoRA. For example, if adapter_weights=[0.5, 0.5], then the merged LoRA output is an average of both LoRAs. Try adjusting the adapter weights to see how it affects the generated image! Copied pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) + +generator = torch.manual_seed(0) +prompt = "A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai" +image = pipeline(prompt, generator=generator, cross_attention_kwargs={"scale": 1.0}).images[0] +image add_weighted_adapter This is an experimental method that adds PEFTs ~peft.LoraModel.add_weighted_adapter method to Diffusers to enable more efficient merging methods. Check out this issue if you’re interested in learning more about the motivation and design behind this integration. 
The ~peft.LoraModel.add_weighted_adapter method provides access to more efficient merging methods such as TIES and DARE. To use these merging methods, make sure you have the latest stable version of Diffusers and PEFT installed. Copied pip install -U diffusers peft There are three steps to merge LoRAs with the ~peft.LoraModel.add_weighted_adapter method: Create a ~peft.PeftModel from the underlying model and LoRA checkpoint. Load a base UNet model and the LoRA adapters. Merge the adapters using the ~peft.LoraModel.add_weighted_adapter method and the merging method of your choice. Let’s dive deeper into what these steps entail. Load a UNet that corresponds to the UNet in the LoRA checkpoint. In this case, both LoRAs use the SDXL UNet as their base model. Copied from diffusers import UNet2DConditionModel +import torch + +unet = UNet2DConditionModel.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", + subfolder="unet", +).to("cuda") Load the SDXL pipeline and the LoRA checkpoints, starting with the ostris/ikea-instructions-lora-sdxl LoRA. Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16, + unet=unet +).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") Now you’ll create a ~peft.PeftModel from the loaded LoRA checkpoint by combining the SDXL UNet and the LoRA UNet from the pipeline. Copied from peft import get_peft_model, LoraConfig +import copy + +sdxl_unet = copy.deepcopy(unet) +ikea_peft_model = get_peft_model( + sdxl_unet, + pipeline.unet.peft_config["ikea"], + adapter_name="ikea" +) + +original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()} +ikea_peft_model.load_state_dict(original_state_dict, strict=True) You can optionally push the ikea_peft_model to the Hub by calling ikea_peft_model.push_to_hub("ikea_peft_model", token=TOKEN). Repeat this process to create a ~peft.PeftModel from the lordjia/by-feng-zikai LoRA. Copied pipeline.delete_adapters("ikea") +sdxl_unet.delete_adapters("ikea") + +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") +pipeline.set_adapters(adapter_names="feng") + +feng_peft_model = get_peft_model( + sdxl_unet, + pipeline.unet.peft_config["feng"], + adapter_name="feng" +) + +original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()} +feng_peft_model.load_state_dict(original_state_dict, strict=True) Load a base UNet model and then load the adapters onto it. Copied from peft import PeftModel + +base_unet = UNet2DConditionModel.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", + subfolder="unet", +).to("cuda") + +model = PeftModel.from_pretrained(base_unet, "stevhliu/ikea_peft_model", use_safetensors=True, subfolder="ikea", adapter_name="ikea") +model.load_adapter("stevhliu/feng_peft_model", use_safetensors=True, subfolder="feng", adapter_name="feng") Merge the adapters using the ~peft.LoraModel.add_weighted_adapter method and the merging method of your choice (learn more about other merging methods in this blog post). For this example, let’s use the "dare_linear" method to merge the LoRAs.
Keep in mind the LoRAs need to have the same rank to be merged! Copied model.add_weighted_adapter( + adapters=["ikea", "feng"], + weights=[1.0, 1.0], + combination_type="dare_linear", + adapter_name="ikea-feng" +) +model.set_adapters("ikea-feng") Now you can generate an image with the merged LoRA. Copied model = model.to(dtype=torch.float16, device="cuda") + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=model, variant="fp16", torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] +image fuse_lora Both the set_adapters() and ~peft.LoraModel.add_weighted_adapter methods require loading the base model and the LoRA adapters separately which incurs some overhead. The fuse_lora() method allows you to fuse the LoRA weights directly with the original weights of the underlying model. This way, you’re only loading the model once which can increase inference and lower memory-usage. You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. For example, if you have a base model and adapters loaded and set as active with the following adapter weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") + +pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) Fuse these LoRAs into the UNet with the fuse_lora() method. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. Copied pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0) Then you should use unload_lora_weights() to unload the LoRA weights since they’ve already been fused with the underlying base model. Finally, call save_pretrained() to save the fused pipeline locally or you could call push_to_hub() to push the fused pipeline to the Hub. Copied pipeline.unload_lora_weights() +# save locally +pipeline.save_pretrained("path/to/fused-pipeline") +# save to the Hub +pipeline.push_to_hub("fused-ikea-feng") Now you can quickly load the fused pipeline and use it for inference without needing to separately load the LoRA adapters. Copied pipeline = DiffusionPipeline.from_pretrained( + "username/fused-ikea-feng", torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] +image You can call unfuse_lora() to restore the original model’s weights (for example, if you want to use a different lora_scale value). However, this only works if you’ve only fused one LoRA adapter to the original model. If you’ve fused multiple LoRAs, you’ll need to reload the model. Copied pipeline.unfuse_lora() torch.compile torch.compile can speed up your pipeline even more, but the LoRA weights must be fused first and then unloaded. 
Typically, the UNet is compiled because it is such a computationally intensive component of the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +# load base model and LoRAs +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") + +# activate both LoRAs and set adapter weights +pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) + +# fuse LoRAs and unload weights +pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0) +pipeline.unload_lora_weights() + +# torch.compile +pipeline.unet.to(memory_format=torch.channels_last) +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] Learn more about torch.compile in the Accelerate inference of text-to-image diffusion models guide. Next steps For more conceptual details about how each merging method works, take a look at the 🤗 PEFT welcomes new merging methods blog post! diff --git a/scrapped_outputs/4dec624d1ffe183bb31b90b292907bef.txt b/scrapped_outputs/4dec624d1ffe183bb31b90b292907bef.txt new file mode 100644 index 0000000000000000000000000000000000000000..507273833a701bdd8365633f4cf442fc0a095949 --- /dev/null +++ b/scrapped_outputs/4dec624d1ffe183bb31b90b292907bef.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. 
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. +time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. 
+timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. 
If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. 
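Because this reference only lists the forward() signature, a minimal hedged sketch of a single denoising forward pass may help; the checkpoint id, tensor shapes, and dummy inputs below are assumptions for illustration (a 512x512 Stable Diffusion latent is 4 channels at 64x64), not part of the original documentation. Copied
import torch
from diffusers import UNet2DConditionModel

# Load the conditional UNet from a Stable Diffusion checkpoint (assumed repo id).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Optional: trade a little speed for memory, as described by set_attention_slice above.
unet.set_attention_slice("auto")

# Dummy inputs: a noisy latent sample, a timestep, and CLIP text-encoder hidden states.
sample = torch.randn(1, unet.config.in_channels, 64, 64, dtype=torch.float16, device="cuda")
encoder_hidden_states = torch.randn(
    1, 77, unet.config.cross_attention_dim, dtype=torch.float16, device="cuda"
)

with torch.no_grad():
    out = unet(sample, 10, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # UNet2DConditionOutput.sample, e.g. torch.Size([1, 4, 64, 64])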
set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. UNet2DConditionOutput class diffusers.models.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. 
freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/4e4847a052d35d2e87f964cc7a373320.txt b/scrapped_outputs/4e4847a052d35d2e87f964cc7a373320.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/4e4847a052d35d2e87f964cc7a373320.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. 
You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... 
).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/4e8b0437b7cee3fb1f9ee655ee900840.txt b/scrapped_outputs/4e8b0437b7cee3fb1f9ee655ee900840.txt new file mode 100644 index 0000000000000000000000000000000000000000..00271b49f1e24fbd75015632570698e2956adecc --- /dev/null +++ b/scrapped_outputs/4e8b0437b7cee3fb1f9ee655ee900840.txt @@ -0,0 +1,253 @@ +The Stable Diffusion Guide 🎨 + + + +Intro + +Stable Diffusion is a Latent Diffusion model developed by researchers from the Machine Vision and Learning group at LMU Munich, a.k.a CompVis. +Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out the official blog post. +Since its public release the community has done an incredible job at working together to make the stable diffusion checkpoints faster, more memory efficient, and more performant. +🧨 Diffusers offers a simple API to run stable diffusion with all memory, computing, and quality improvements. +This notebook walks you through the improvements one-by-one so you can best leverage StableDiffusionPipeline for inference. + +Prompt Engineering 🎨 + + + +When running *Stable Diffusion* in inference, we usually want to generate a certain type, or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation. +So to begin with, it is most important to speed up stable diffusion as much as possible to generate as many pictures as possible in a given amount of time. +This can be done by both improving the computational efficiency (speed) and the memory efficiency (GPU RAM). +Let’s start by looking into computational efficiency first. +Throughout the notebook, we will focus on runwayml/stable-diffusion-v1-5: + + + Copied +model_id = "runwayml/stable-diffusion-v1-5" +Let’s load the pipeline. + +Speed Optimization + + + + Copied +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained(model_id) +We aim at generating a beautiful photograph of an old warrior chief and will later try to find the best prompt to generate such a photograph. For now, let’s keep the prompt simple: + + + Copied +prompt = "portrait photo of a old warrior chief" +To begin with, we should make sure we run inference on GPU, so let’s move the pipeline to GPU, just like you would with any PyTorch module. + + + Copied +pipe = pipe.to("cuda") +To generate an image, you should use the [~StableDiffusionPipeline.__call__] method. 
+To make sure we can reproduce more or less the same image in every call, let’s make use of the generator. See the documentation on reproducibility here for more information. + + + Copied +generator = torch.Generator("cuda").manual_seed(0) +Now, let’s take a spin on it. + + + Copied +image = pipe(prompt, generator=generator).images[0] +image + +Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4). +The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let’s load the model now in float16 instead. + + + Copied +import torch + +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipe = pipe.to("cuda") +And we can again call the pipeline to generate an image. + + + Copied +generator = torch.Generator("cuda").manual_seed(0) + +image = pipe(prompt, generator=generator).images[0] +image + +Cool, this is almost three times as fast for arguably the same image quality. +We strongly suggest always running your pipelines in float16 as so far we have very rarely seen degradations in quality because of it. +Next, let’s see if we need to use 50 inference steps or whether we could use significantly fewer. The number of inference steps is associated with the denoising scheduler we use. Choosing a more efficient scheduler could help us decrease the number of steps. +Let’s have a look at all the schedulers the stable diffusion pipeline is compatible with. + + + Copied +pipe.scheduler.compatibles + + + Copied + [diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler] +Cool, that’s a lot of schedulers. +🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation here. +Alright, right now Stable Diffusion is using the PNDMScheduler which usually requires around 50 inference steps. However, other schedulers such as DPMSolverMultistepScheduler or DPMSolverSinglestepScheduler seem to get away with just 20 to 25 inference steps. Let’s try them out. +You can set a new scheduler by making use of the from_config function. + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +Now, let’s try to reduce the number of inference steps to just 20. + + + Copied +generator = torch.Generator("cuda").manual_seed(0) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image + +The image now does look a little different, but it’s arguably still of equally high quality. We now cut inference time to just 4 seconds though 😍. 
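Before moving on to memory, here is a short recap sketch that combines the speed optimizations shown so far (half precision, the DPMSolverMultistepScheduler, and 20 inference steps); it only restates the code from above in one place:
Copied
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load the checkpoint in float16 and move it to the GPU
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Swap in a faster scheduler so ~20 steps are enough
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]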
+ +Memory Optimization + +Less memory used in generation indirectly implies more speed, since we’re often trying to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second too. +The easiest way to see how many images we can generate at once is to simply try it out, and see when we get an “Out-of-memory (OOM)” error. +We can run batched inference by simply passing a list of prompts and generators. Let’s define a quick function that generates a batch for us. + + + Copied +def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} +This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like. +We also need a method that allows us to easily display a batch of images. + + + Copied +from PIL import Image + +def image_grid(imgs, rows=2, cols=2): + w, h = imgs[0].size + grid = Image.new('RGB', size=(cols*w, rows*h)) + + for i, img in enumerate(imgs): + grid.paste(img, box=(i%cols*w, i//cols*h)) + return grid +Cool, let’s see how much memory we can use starting with batch_size=4. + + + Copied +images = pipe(**get_inputs(batch_size=4)).images +image_grid(images) + +Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, we can see we only generate images slightly faster (3.75s/image compared to 4s/image previously). +However, the community has found some nice tricks to ease the memory constraints further. After stable diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from this GitHub thread. +By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory. +It can easily be enabled by calling enable_attention_slicing as is documented here. + + + Copied +pipe.enable_attention_slicing() +Great, now that attention slicing is enabled, let’s try to double the batch size again, going for batch_size=8. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Nice, it works. However, the speed gain is again not very big (it might, however, be much more significant on other GPUs). +We’re at roughly 3.5 seconds per image 🔥 which is probably as fast as we can get on a simple T4 without sacrificing quality. +Next, let’s look into how to improve the quality! + +Quality Improvements + +Now that our image generation pipeline is blazing fast, let’s try to get maximum image quality. +First of all, image quality is extremely subjective, so it’s difficult to make general claims here. +The most obvious step to take to improve quality is to use better checkpoints. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here: +Official Release - 22 Aug 2022: Stable-Diffusion 1.4 +20 October 2022: Stable-Diffusion 1.5 +24 Nov 2022: Stable-Diffusion 2.0 +7 Dec 2022: Stable-Diffusion 2.1 +Newer versions don’t necessarily mean better image quality with the same parameters.
People mentioned that 2.0 is slightly worse than 1.5 for certain prompts, but given the right prompt engineering 2.0 and 2.1 seem to be better. +Overall, we strongly recommend just trying the models out and reading up on advice online (e.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality; see for example this nice blog post). +Additionally, the community has started fine-tuning many of the above versions on certain styles, with some of them reaching extremely high quality and gaining a lot of traction. +We recommend having a look at all diffusers checkpoints sorted by downloads and trying out the different checkpoints. +For the following, we will stick to v1.5 for simplicity. +Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at this blog post. +Let’s load stabilityai’s newest fine-tuned autoencoder. + + + Copied +from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +Now we can assign it to the pipeline’s vae attribute to use it. + + + Copied +pipe.vae = vae +Let’s run the same prompt as before to compare quality. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Seems like the difference is only very minor, but the new generations are arguably a bit sharper. +Cool, finally, let’s look a bit into prompt engineering. +Our goal was to generate a photo of an old warrior chief. Let’s now try to bring a bit more color into the photos and make them look more impressive. +Originally our prompt was “portrait photo of an old warrior chief”. +To improve the prompt, it often helps to add cues that would typically accompany high-quality photos posted online, as well as more details. +Essentially, when doing prompt engineering, one has to think: +How was the photo I want, or similar photos, probably stored on the internet? +What additional detail can I give that steers the model into the style that I want? +Cool, let’s add more details. + + + Copied +prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +And let’s also add some cues that usually help to generate higher quality images. + + + Copied +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" +prompt +Cool, let’s now try this prompt. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Pretty impressive! We got some very high-quality image generations there. The 2nd image is my personal favorite, so I’ll re-use this seed and see whether I can tweak the prompts slightly by using “oldest warrior”, “old”, "", and “young” instead of “old”.
+ + + Copied +prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image + +images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images +image_grid(images) + +The first picture looks nice! The eye position changed slightly, but the result still looks good. This finishes up our 101 guide on how to use Stable Diffusion 🤗. +For more information on optimization or other guides, I recommend taking a look at the following: +Blog post about Stable Diffusion: In-detail blog post explaining Stable Diffusion. +FlashAttention: XFormers flash attention can optimize your model even further with more speed and memory improvements. +Dreambooth - Quickly customize the model by fine-tuning it. +General info on Stable Diffusion - Info on other tasks that are powered by Stable Diffusion. diff --git a/scrapped_outputs/4e979f80711cccfeab67520b91423351.txt b/scrapped_outputs/4e979f80711cccfeab67520b91423351.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/4e979f80711cccfeab67520b91423351.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint.
For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. 
For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/4ea2f14b5e0a08d20ec7a3934c85afe1.txt b/scrapped_outputs/4ea2f14b5e0a08d20ec7a3934c85afe1.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/4ea2f14b5e0a08d20ec7a3934c85afe1.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. 
Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/4ed164220a7935a4cc18285f191efbb0.txt b/scrapped_outputs/4ed164220a7935a4cc18285f191efbb0.txt new file mode 100644 index 0000000000000000000000000000000000000000..40eee2ccb81615eae1ad390fef59f9ad8e61b9b4 --- /dev/null +++ b/scrapped_outputs/4ed164220a7935a4cc18285f191efbb0.txt @@ -0,0 +1,362 @@ +DEIS + +Fast Sampling of Diffusion Models with Exponential Integrator. + +Overview + +Original paper can be found here. The original implementation can be found here. + +DEISMultistepScheduler + + +class diffusers.DEISMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'deis' +solver_type: str = 'logrho' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. 
+ + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DEIS; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided sampling, and +solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon), or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True + + +algorithm_type (str, default deis) — +the algorithm type for the solver. current we support multistep deis, we will add other variants of DEIS in +the future + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DEIS for steps < 15, especially for steps <= 10. + + + +DEIS (https://arxiv.org/abs/2204.13902) is a fast high order solver for diffusion ODEs. We slightly modify the +polynomial fitting formula in log-rho space instead of the original linear t space in DEIS paper. The modification +enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. More +variants of DEIS can be found in https://github.com/qsh-zh/deis. +Currently, we support the log-rho multistep DEIS. We recommend to use solver_order=2 / 3 while solver_order=1 +reduces to DDIM. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm DEIS needs. 
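In practice, this scheduler is rarely stepped by hand; like the other multistep schedulers, it is usually swapped into an existing pipeline via from_config. A small usage sketch follows (the checkpoint and step count are illustrative choices, not part of this reference):
Copied
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Reuse the pipeline's scheduler config; the default solver_order=2 is the
# recommended setting for guided sampling
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]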
+ +deis_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DEIS (equivalent to DDIM). + +multistep_deis_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DEIS. + +multistep_deis_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DEIS. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. 
+ + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DEIS. diff --git a/scrapped_outputs/4eda85b3409edb386df6a87b7ee74e70.txt b/scrapped_outputs/4eda85b3409edb386df6a87b7ee74e70.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4eeee5f3fcb9f8d44dca3768321910b6.txt b/scrapped_outputs/4eeee5f3fcb9f8d44dca3768321910b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4ef52d27aa988010147a0d6f0393f5e3.txt b/scrapped_outputs/4ef52d27aa988010147a0d6f0393f5e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..29dabf482c809329f369fadcbe7b2aa7a2b04292 --- /dev/null +++ b/scrapped_outputs/4ef52d27aa988010147a0d6f0393f5e3.txt @@ -0,0 +1,803 @@ +Text-Guided Image Inpainting + + +StableDiffusionInpaintPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. +The original codebase can be found here: +Stable Diffusion V1: CampVis/stable-diffusion +Stable Diffusion V2: Stability-AI/stablediffusion +Available checkpoints are: +stable-diffusion-inpainting (512x512 resolution): runwayml/stable-diffusion-inpainting +stable-diffusion-2-inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting + +class diffusers.StableDiffusionInpaintPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. 
The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. 
+ + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_lora_weights + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +unet_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the +serialization process easier and cleaner. + + +text_encoder_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from +transformers, we cannot rejig it. That is why we have to explicitly pass the text encoder LoRA state +dict. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. 
+ + + +Save the LoRA parameters corresponding to the UNet and the text encoder. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +class diffusers.FlaxStableDiffusionInpaintPipeline + +< +source +> +( +vae: FlaxAutoencoderKL +text_encoder: FlaxCLIPTextModel +tokenizer: CLIPTokenizer +unet: FlaxUNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler] +safety_checker: FlaxStableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +dtype: dtype = + +) + + +Parameters + +vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (FlaxUNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. + + +safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt_ids: array +mask: array +masked_image: array +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +prng_seed: PRNGKeyArray +num_inference_steps: int = 50 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +guidance_scale: typing.Union[float, array] = 7.5 +latents: array = None +neg_prompt_ids: array = None +return_dict: bool = True +jit: bool = False + +) +→ +FlaxStableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +latents (jnp.array, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. tensor will ge generated +by sampling using the supplied random generator. + + +jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. NOTE: This argument +exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. + + +Returns + +FlaxStableDiffusionPipelineOutput or tuple + + + +FlaxStableDiffusionPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... 
) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) diff --git a/scrapped_outputs/4f334eaa43b1d7b5d0190772b72763f6.txt b/scrapped_outputs/4f334eaa43b1d7b5d0190772b72763f6.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/4f334eaa43b1d7b5d0190772b72763f6.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) — +Whether the commit_hash of the loaded configuration is returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method.
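To make the save/load round trip concrete, here is a minimal sketch; the local directory name ./ddpm-scheduler-config is only an illustration. It persists a scheduler configuration with save_config(), reloads the raw dictionary with load_config(), and rebuilds a compatible scheduler from it with from_config(): Copied
>>> from diffusers import DDPMScheduler, DDIMScheduler

>>> # Save the configuration of a scheduler to a local directory (writes scheduler_config.json).
>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
>>> scheduler.save_config("./ddpm-scheduler-config")

>>> # Reload the raw configuration dictionary from that directory ...
>>> config = DDPMScheduler.load_config("./ddpm-scheduler-config")

>>> # ... and instantiate any compatible scheduler class from it.
>>> scheduler = DDIMScheduler.from_config(config)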
to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/4f60357bb16442189281269f7ac2da16.txt b/scrapped_outputs/4f60357bb16442189281269f7ac2da16.txt new file mode 100644 index 0000000000000000000000000000000000000000..552fc9d1655a6840094e03d21339f40c7b49403d --- /dev/null +++ b/scrapped_outputs/4f60357bb16442189281269f7ac2da16.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. 
Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. 
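As a quick illustration of the continuous-input path described above, the following minimal sketch instantiates a small Transformer2DModel and runs its forward pass on a random image-like latent; the configuration values and tensor sizes are arbitrary and chosen only so the shapes line up: Copied
import torch
from diffusers import Transformer2DModel

# Inner dimension = num_attention_heads * attention_head_dim = 2 * 16 = 32 channels.
model = Transformer2DModel(num_attention_heads=2, attention_head_dim=16, in_channels=32)

# Continuous input: (batch_size, channels, height, width); the values are random and purely illustrative.
latents = torch.randn(1, 32, 16, 16)

with torch.no_grad():
    output = model(latents).sample  # Transformer2DModelOutput.sample has the same shape as the input

print(output.shape)  # torch.Size([1, 32, 16, 16])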
diff --git a/scrapped_outputs/4f633e3c2afbc22b67fbc46abcd71a93.txt b/scrapped_outputs/4f633e3c2afbc22b67fbc46abcd71a93.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/4f6b8a0d93c88fd75fe0e7ec9d476268.txt b/scrapped_outputs/4f6b8a0d93c88fd75fe0e7ec9d476268.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/4f6b8a0d93c88fd75fe0e7ec9d476268.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. 
A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 
📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline. 
Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/4f826287ff0a627b806ffd4bbaf67842.txt b/scrapped_outputs/4f826287ff0a627b806ffd4bbaf67842.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/4f826287ff0a627b806ffd4bbaf67842.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). 
A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. 
Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as the callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt).images enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
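As a rough sketch of how the offloading method above is typically combined with the combined pipeline, the snippet below is not from the original documentation and assumes a CUDA device with enough free memory; all argument values are illustrative. Copied
import torch
from diffusers import WuerstchenCombinedPipeline

pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16)
pipe.enable_model_cpu_offload()  # each sub-model is moved to the GPU only while its forward() runs

images = pipe(
    prompt="an image of a shiba inu, donning a spacesuit and helmet",
    prior_guidance_scale=4.0,    # classifier-free guidance for the prior stage
    decoder_guidance_scale=0.0,  # the decoder stage is usually run without guidance
    num_images_per_prompt=1,
).images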
WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt. Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with the decoder to denoise the image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24 * 10.67)=256 and +width=int(24 * 10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... 
).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt).images Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/4f88dcea4c139a3507b62feeb67ee3a7.txt b/scrapped_outputs/4f88dcea4c139a3507b62feeb67ee3a7.txt new file mode 100644 index 0000000000000000000000000000000000000000..552fc9d1655a6840094e03d21339f40c7b49403d --- /dev/null +++ b/scrapped_outputs/4f88dcea4c139a3507b62feeb67ee3a7.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete).
+Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. 
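To make the forward contract above concrete, here is a minimal sketch (not part of the original documentation) of a continuous-input forward pass; the configuration values and tensor sizes are arbitrary assumptions chosen so the shapes line up. Copied
import torch
from diffusers import Transformer2DModel

# Continuous input: in_channels must be set and be divisible by norm_num_groups.
model = Transformer2DModel(
    num_attention_heads=4,
    attention_head_dim=32,
    in_channels=32,
    num_layers=1,
    norm_num_groups=32,
)

hidden_states = torch.randn(1, 32, 16, 16)  # (batch, channel, height, width)
out = model(hidden_states)                  # no encoder_hidden_states, so pure self-attention
print(out.sample.shape)                     # reshaped back to (batch, channel, height, width)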
diff --git a/scrapped_outputs/4fec989f5268fdcb3c07156f592ad591.txt b/scrapped_outputs/4fec989f5268fdcb3c07156f592ad591.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/4fec989f5268fdcb3c07156f592ad591.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... 
model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
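As an illustration of the FreeU methods documented above, the following sketch (not from the original docs) toggles FreeU on the upscale pipeline; the s1/s2/b1/b2 values are placeholder assumptions, so consult the official FreeU repository for settings known to work well with a given model. Copied
import torch
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# b1/b2 amplify backbone features, s1/s2 attenuate skip features (stages 1 and 2).
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2)
# ... run pipeline(prompt=..., image=...) as in the upscaling example above ...
pipeline.disable_freeu()  # revert to the default UNet behavior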
diff --git a/scrapped_outputs/5008671931166a06ed028bdbc861137f.txt b/scrapped_outputs/5008671931166a06ed028bdbc861137f.txt new file mode 100644 index 0000000000000000000000000000000000000000..53572038a6e775b927570dcbb9b0d0c8e4e1d9f7 --- /dev/null +++ b/scrapped_outputs/5008671931166a06ed028bdbc861137f.txt @@ -0,0 +1,209 @@ +Denoising Diffusion Probabilistic Models (DDPM) + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original paper can be found here. + +DDPMScheduler + + +class diffusers.DDPMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +variance_type: str = 'fixed_small' +clip_sample: bool = True +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +clip_sample_range: float = 1.0 +sample_max_value: float = 1.0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip predicted sample for numerical stability. + + +clip_sample_range (float, default 1.0) — +the maximum magnitude for sample clipping. Valid only when clip_sample=True. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). 
+ + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). Valid only when thresholding=True. + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True. + + + +Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and +Langevin dynamics sampling. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2006.11239 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: typing.Optional[int] = None +device: typing.Union[str, torch.device] = None +timesteps: typing.Optional[typing.List[int]] = None + +) + + +Parameters + +num_inference_steps (Optional[int]) — +the number of diffusion steps used when generating samples with a pre-trained model. If passed, then +timesteps must be None. + + +device (str or torch.device, optional) — +the device to which the timesteps are moved to. + + +custom_timesteps (List[int], optional) — +custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If passed, num_inference_steps +must be None. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than DDPMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDPMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). 
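To show how set_timesteps and step fit together in practice, here is a minimal denoising-loop sketch; it is not part of the original document, and the checkpoint name, step count, and device are assumptions. Copied
import torch
from diffusers import DDPMScheduler, UNet2DModel

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")

scheduler.set_timesteps(num_inference_steps=50)
size = model.config.sample_size
sample = torch.randn(1, 3, size, size, device="cuda")  # start from pure Gaussian noise

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample                    # predicted noise (epsilon)
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # reverse one diffusion step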
diff --git a/scrapped_outputs/502a71bd0b98e29ca5e4a83e9e5fe5eb.txt b/scrapped_outputs/502a71bd0b98e29ca5e4a83e9e5fe5eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..99f9d4b5885bd206a7181438e9f28f16726d42b9 --- /dev/null +++ b/scrapped_outputs/502a71bd0b98e29ca5e4a83e9e5fe5eb.txt @@ -0,0 +1,171 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. 
For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. 
Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. 
To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with + + + + Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub + + +Custom Diffusion can also learn multiple concepts if you provide a JSON file with some details about each concept it should learn. Run clip-retrieval to collect some real images to use for regularization: Copied pip install clip-retrieval +python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200 Then you can launch the script: Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --output_dir=$OUTPUT_DIR \ + --concepts_list=./concept_list.json \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --num_class_images=200 \ + --scale_lr \ + --hflip \ + --modifier_token "+" \ + --push_to_hub + + +Once training is finished, you can use your new Custom Diffusion model for inference. + + + + Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipeline( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") + + + + Copied import torch +from huggingface_hub.repocard import RepoCard +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/custom-diffusion-cat-wooden-pot", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion(model_id, weight_name=".bin") +pipeline.load_textual_inversion(model_id, weight_name=".bin") + +image = pipeline( + "the cat sculpture in the style of a wooden pot", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("multi-subject.png") + + + Next steps Congratulations on training a model with Custom Diffusion! 
🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/503cc45ca2872775b4ced889223f3aa6.txt b/scrapped_outputs/503cc45ca2872775b4ced889223f3aa6.txt new file mode 100644 index 0000000000000000000000000000000000000000..c927ce7d06865cbd6e16bcd6bb15efd3ab6ad802 --- /dev/null +++ b/scrapped_outputs/503cc45ca2872775b4ced889223f3aa6.txt @@ -0,0 +1,152 @@ +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper + + +Overview + +Inspired by Karras et. al. Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +KDPM2AncestralDiscreteScheduler + + +class diffusers.KDPM2AncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Scheduler created by @crowsonkb in k_diffusion, see: +https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 +Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. 
If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/5043922314f329b4e255c958957809a3.txt b/scrapped_outputs/5043922314f329b4e255c958957809a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/5043922314f329b4e255c958957809a3.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! diff --git a/scrapped_outputs/505e08f34de9a238186adaf159310817.txt b/scrapped_outputs/505e08f34de9a238186adaf159310817.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/505e08f34de9a238186adaf159310817.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. 
The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. 
The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/50757a5f3868e19ff2a4e25c64028ea1.txt b/scrapped_outputs/50757a5f3868e19ff2a4e25c64028ea1.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/50757a5f3868e19ff2a4e25c64028ea1.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. 
DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. 
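For reference, the score computed below roughly follows Hessel et al.'s CLIPScore with the 100x scaling used by the torchmetrics implementation (the symbols here are illustrative notation, not names from the library):

\[ \mathrm{CLIPScore}(I, C) \;=\; \max\big(100 \cdot \cos(E_I, E_C),\, 0\big) \]

where E_I and E_C are the normalized CLIP embeddings of a generated image I and its prompt C, and the reported value is the mean over all image-prompt pairs.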
Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. 
Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
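In symbols (illustrative notation only, chosen to match the code that follows), the CLIP directional similarity is the cosine similarity between the change in image embeddings and the change in caption embeddings:

\[ \mathrm{CLIP}_{\text{dir}} \;=\; \cos\big(E_{\text{img}}(x_{\text{edited}}) - E_{\text{img}}(x_{\text{original}}),\; E_{\text{txt}}(c_{\text{edited}}) - E_{\text{txt}}(c_{\text{original}})\big) \]

where E_img and E_txt denote the L2-normalized CLIP image and text encoders, x denotes images, and c denotes captions.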
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
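For reference, if (μ_r, Σ_r) and (μ_g, Σ_g) are the mean and covariance of the Inception features of the real and generated images respectively, the Fréchet distance reported as FID is:

\[ \mathrm{FID} \;=\; \lVert \mu_r - \mu_g \rVert_2^2 \;+\; \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big) \]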
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
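If you also want to report KID, torchmetrics exposes it through an interface similar to the FID one used above. The snippet below is a sketch rather than part of the original example: it reuses the real_images and fake_images tensors prepared earlier and assumes a recent torchmetrics version; subset_size must not exceed the number of images (10 here), which is why it is lowered from its default. Copied from torchmetrics.image.kid import KernelInceptionDistance

# Minimal sketch: KID on the same 10 real and 10 generated images as above.
# subset_size must be <= the number of samples per set, so we lower it here.
kid = KernelInceptionDistance(subset_size=10, normalize=True)
kid.update(real_images, real=True)
kid.update(fake_images, real=False)

kid_mean, kid_std = kid.compute()
print(f"KID: {float(kid_mean)} ± {float(kid_std)}") Like FID, lower KID values indicate that the two sets of images are more similar.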
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/50a720e8866e7b8de6be5095aa56ae11.txt b/scrapped_outputs/50a720e8866e7b8de6be5095aa56ae11.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/50a720e8866e7b8de6be5095aa56ae11.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 
🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. 
For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! 
Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/50b86ad2804655bd7e98856db5c653c2.txt b/scrapped_outputs/50b86ad2804655bd7e98856db5c653c2.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/50b86ad2804655bd7e98856db5c653c2.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. 
+A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, install 🤗 Diffusers with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Copied pip install -e ".[torch]" JAX Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests.
+The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. +You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/50ce766e641b21f87fdcad629861ab82.txt b/scrapped_outputs/50ce766e641b21f87fdcad629861ab82.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/50ce766e641b21f87fdcad629861ab82.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. 
If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. 
It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/50e19b90f2dbc81291f0ab2701aa3098.txt b/scrapped_outputs/50e19b90f2dbc81291f0ab2701aa3098.txt new file mode 100644 index 0000000000000000000000000000000000000000..bcb666def15e33f1f85b4b3d91e464c6e12c8f33 --- /dev/null +++ b/scrapped_outputs/50e19b90f2dbc81291f0ab2701aa3098.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples.
In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers are skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1024) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 64) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin.
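For orientation, the 3D UNet is loaded like any other diffusers model with from_pretrained(); a minimal sketch, assuming the damo-vilab/text-to-video-ms-1.7b checkpoint stores this UNet in its unet subfolder: Copied
import torch
from diffusers import UNet3DConditionModel

# assumed checkpoint layout: the text-to-video repo keeps the 3D UNet in a "unet" subfolder
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
)
print(unet.config.cross_attention_dim, unet.config.attention_head_dim)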
Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. 
+mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/50facfc5f33acf8287b208ceb0c832a9.txt b/scrapped_outputs/50facfc5f33acf8287b208ceb0c832a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/50facfc5f33acf8287b208ceb0c832a9.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. 
s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. 
sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/51981ad3085c2f71953f72186e64b163.txt b/scrapped_outputs/51981ad3085c2f71953f72186e64b163.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/51b6ebac06d4234bc95c4564ad86381c.txt b/scrapped_outputs/51b6ebac06d4234bc95c4564ad86381c.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4d8d702045481a41c259fb77d2569da57e434ee --- /dev/null +++ b/scrapped_outputs/51b6ebac06d4234bc95c4564ad86381c.txt @@ -0,0 +1,152 @@ +Heun scheduler inspired by Karras et. al paper + + +Overview + +Algorithm 1 of Karras et. al. +Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +HeunDiscreteScheduler + + +class diffusers.HeunDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Implements Algorithm 2 (Heun steps) from Karras et al. (2022). for discrete beta schedules. 
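In practice, the scheduler is usually swapped into an existing pipeline via from_config() so that it inherits the checkpoint's beta schedule; a brief sketch, assuming a Stable Diffusion checkpoint: Copied
import torch
from diffusers import DiffusionPipeline, HeunDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# reuse the pipeline's scheduler config so the beta schedule and timestep settings match the checkpoint
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
image = pipe("an astronaut riding a horse on mars", num_inference_steps=30).images[0]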
Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/51b9ae0f5602d2289195b97e47c3b9f6.txt b/scrapped_outputs/51b9ae0f5602d2289195b97e47c3b9f6.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/51b9ae0f5602d2289195b97e47c3b9f6.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. 
It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import 
AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/51bc42b8005abfc2ce081c63731e6576.txt b/scrapped_outputs/51bc42b8005abfc2ce081c63731e6576.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/51bee83d729553a83f673f1d777efe65.txt b/scrapped_outputs/51bee83d729553a83f673f1d777efe65.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/51bee83d729553a83f673f1d777efe65.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. 
JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. 
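Conceptually, Min-SNR clips the per-timestep signal-to-noise ratio before using it to weight the MSE loss, so that nearly noise-free timesteps no longer dominate training; a rough, illustrative sketch (not the script's exact helper) for epsilon prediction with the recommended gamma of 5.0: Copied
import torch
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (4,))

# SNR(t) = alpha_bar_t / (1 - alpha_bar_t)
alphas_cumprod = noise_scheduler.alphas_cumprod[timesteps]
snr = alphas_cumprod / (1.0 - alphas_cumprod)

snr_gamma = 5.0
# clip the SNR at gamma, then normalize; each sample's MSE loss is scaled by this weight
mse_loss_weights = torch.clamp(snr, max=snr_gamma) / snr  # epsilon prediction
# for v_prediction, the denominator becomes (snr + 1) instead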
This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. 
The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 
16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. 
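As a preview of that guide, here is a minimal sketch of loading DreamBooth LoRA weights for inference; the directory path/to/dreambooth-lora is a placeholder for wherever train_dreambooth_lora.py saved the weights: Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# attach the LoRA layers produced by train_dreambooth_lora.py on top of the base model
pipeline.load_lora_weights("path/to/dreambooth-lora")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket.png")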
diff --git a/scrapped_outputs/51c0d38348900322032fb28bb62d1e4b.txt b/scrapped_outputs/51c0d38348900322032fb28bb62d1e4b.txt new file mode 100644 index 0000000000000000000000000000000000000000..84d54169e993a685cd0d3adbb2feeedc473399bb --- /dev/null +++ b/scrapped_outputs/51c0d38348900322032fb28bb62d1e4b.txt @@ -0,0 +1,54 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. 
timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. 
pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/51ef72266a6c3da7da6e7a355a4db946.txt b/scrapped_outputs/51ef72266a6c3da7da6e7a355a4db946.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/51ef72266a6c3da7da6e7a355a4db946.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/52020bd88c3a134bc739f1133c763cef.txt b/scrapped_outputs/52020bd88c3a134bc739f1133c763cef.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/52020bd88c3a134bc739f1133c763cef.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice that throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention() to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention.
Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. 
To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. 
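To see what the combined pipeline is doing for you, here is a rough sketch of that two-stage flow with the prior and the inpainting decoder loaded separately. Treat it as an illustration rather than a drop-in recipe: the prior checkpoint id and the call signature shown here are the commonly documented ones, so double-check them against the Kandinsky 2.2 API reference before relying on them.
Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22InpaintPipeline
from diffusers.utils import load_image

# stage 1: the prior turns the text prompt into CLIP image embeddings
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image_embeds, negative_image_embeds = prior(prompt).to_tuple()

# stage 2: the inpainting decoder consumes the embeddings instead of a text prompt
decoder = KandinskyV22InpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

image = decoder(
    image=init_image,
    mask_image=mask_image,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
).images[0]
In practice, the combined route below is simpler because AutoPipelineForInpainting hides the embedding hand-off behind the usual prompt, image, and mask_image arguments: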
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 
📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline. Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. 
Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/52328ecad923a8b4944b4d1383d93c21.txt b/scrapped_outputs/52328ecad923a8b4944b4d1383d93c21.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/525eb0fd26b01cfa6be0171617af72a1.txt b/scrapped_outputs/525eb0fd26b01cfa6be0171617af72a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/525eb0fd26b01cfa6be0171617af72a1.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the +expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask the image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W). For a numpy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting.
This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image (PipelineImageInput, optional) — +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
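As a quick illustration of load_lora_weights() with this pipeline, the sketch below loads a LoRA on top of the inpainting checkpoint; "path/to/lora" and the adapter name are placeholders for whatever LoRA you have that is compatible with the base model:
Copied
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# "path/to/lora" is a placeholder for a local folder or Hub repository containing LoRA weights
pipe.load_lora_weights("path/to/lora", adapter_name="my_style")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).resize((512, 512))
mask_image = load_image(mask_url).resize((512, 512))

prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]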
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/529c09186cf4eac7cd44ec651531021b.txt b/scrapped_outputs/529c09186cf4eac7cd44ec651531021b.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/529c09186cf4eac7cd44ec651531021b.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. 
Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. 
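Because the launch command above passes --push_to_hub, the trained pipeline should also be loadable directly from your Hub repository. The snippet below is only a sketch that mirrors the inference example above; the repository id is a placeholder for the repo created by --push_to_hub: Copied
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

# Placeholder repo id: replace with the repository created by --push_to_hub.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "your-username/wuerstchen-prior-pokemon-model", torch_dtype=torch.float16
).to("cuda")

image = pipeline(
    "A robot pokemon, 4k photo",
    prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,
    prior_guidance_scale=4.0,
).images[0]
image.save("robot-pokemon.png")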
diff --git a/scrapped_outputs/529c1b7beeac42601b36803bbf52f1ea.txt b/scrapped_outputs/529c1b7beeac42601b36803bbf52f1ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/529c1b7beeac42601b36803bbf52f1ea.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. 
For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. 
Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/52e382d786b9f8d8386878213eb3d792.txt b/scrapped_outputs/52e382d786b9f8d8386878213eb3d792.txt new file mode 100644 index 0000000000000000000000000000000000000000..cbdfab551c65a04d22ed1db010bb50b8fb750880 --- /dev/null +++ b/scrapped_outputs/52e382d786b9f8d8386878213eb3d792.txt @@ -0,0 +1,852 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. 
guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... 
).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
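As a rough sketch (not taken from the official examples), the sliced-VAE and xFormers helpers documented above can be combined on the ControlNet pipeline like this; the checkpoint ids simply mirror the canny example earlier on this page: Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Decode the VAE in slices so larger batches fit into memory.
pipe.enable_vae_slicing()

# Memory-efficient attention from xFormers (requires the xformers package).
# Per the warning above, don't also call enable_attention_slicing() on top of this.
pipe.enable_xformers_memory_efficient_attention()

# ... run the pipeline as in the canny example above, then restore the defaults if desired:
pipe.disable_vae_slicing()
pipe.disable_xformers_memory_efficient_attention()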
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. 
If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for NumPy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that region based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded region before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic.
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation.
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps.
This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. 
controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/530d047de1ec9e1c359beb7526c636e6.txt b/scrapped_outputs/530d047de1ec9e1c359beb7526c636e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/530d047de1ec9e1c359beb7526c636e6.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. 
In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also use tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models.
This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] To properly offload models after they’re called, the entire pipeline must be run, and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation.
To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. 
To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/5335f3350456bb3123a880b1567ffa49.txt b/scrapped_outputs/5335f3350456bb3123a880b1567ffa49.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/5335f3350456bb3123a880b1567ffa49.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. 
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. 
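In practice, you rarely call these methods directly; a scheduler is usually swapped into a pipeline, which then drives set_timesteps() and step() for you. A minimal sketch (the checkpoint, prompt, and skip_prk_steps setting below are illustrative): Copied
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# skip_prk_steps=True skips the Runge-Kutta warmup and runs PLMS steps only,
# which is how Stable Diffusion checkpoints typically configure PNDM
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config, skip_prk_steps=True)

image = pipe("a photo of a cat", num_inference_steps=50).images[0]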
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/5357cd8e009fecb5b1519b11659cc79a.txt b/scrapped_outputs/5357cd8e009fecb5b1519b11659cc79a.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/5357cd8e009fecb5b1519b11659cc79a.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. 
The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/536d33344c4915d5fb7394c593957a72.txt b/scrapped_outputs/536d33344c4915d5fb7394c593957a72.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/536d33344c4915d5fb7394c593957a72.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. 
Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/539e92c3b985d9081159317d8801ce9e.txt b/scrapped_outputs/539e92c3b985d9081159317d8801ce9e.txt new file mode 100644 index 0000000000000000000000000000000000000000..025d8d9b7e21e34a1a210fa0bd70fff4f7c14e19 --- /dev/null +++ b/scrapped_outputs/539e92c3b985d9081159317d8801ce9e.txt @@ -0,0 +1,63 @@ +BaseOutputs + +All models have outputs that are instances of subclasses of BaseOutput. Those are +data structures containing all the information returned by the model, but that can also be used as tuples or +dictionaries. +Let’s see how this looks in an example: + + + Copied +from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() +The outputs object is a ImagePipelineOutput, as we can see in the +documentation of that class below, it means it has an image attribute. 
+You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None: + + + Copied +outputs.images +Or you can access it via keyword lookup: + + + Copied +outputs["images"] +When considering our outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, we can retrieve the images via indexing: + + + Copied +outputs[:1] +which will return the tuple (outputs.images,). + +BaseOutput + + +class diffusers.utils.BaseOutput + +< +source +> +( +) + + + +Base class for all model outputs as a dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. +You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. + +to_tuple + +< +source +> +( +) + + + +Convert self to a tuple containing all the attributes/keys that are not None. diff --git a/scrapped_outputs/53b76606215209e26e95d7464bb0d0ce.txt b/scrapped_outputs/53b76606215209e26e95d7464bb0d0ce.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/53b76606215209e26e95d7464bb0d0ce.txt @@ -0,0 +1,115 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: a token whose embeddings are used to initialize the modifier_token embeddings Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion trains on the target images together with a small set of real images to prevent overfitting. As you can imagine, overfitting can happen easily when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts.
Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. 
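If it helps, one way to pull a small example image set into a local folder is with huggingface_hub (the dataset id below is only a placeholder — substitute the repository that actually hosts the example cat images, or point the folder at your own pictures): Copied
from huggingface_hub import snapshot_download

local_dir = "./data/cat"
# downloads every file from the dataset repository into ./data/cat
snapshot_download(
    "your-username/cat-example-images",  # placeholder repo id
    local_dir=local_dir,
    repo_type="dataset",
    ignore_patterns=".gitattributes",
)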
You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images, and OUTPUT_DIR to where you want to save the model. You’ll use <new1> as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with single concept multiple concepts Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "<new1>" \ + --validation_prompt="<new1> cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin") + +image = pipeline( + "<new1> cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/53c5d009274cd2b0e66751336eb779fc.txt b/scrapped_outputs/53c5d009274cd2b0e66751336eb779fc.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/53c5d009274cd2b0e66751336eb779fc.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead.
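For comparison, a minimal sketch of that pipeline-level path (the checkpoint and weight names below mirror the examples later in this section): Copied
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# load_lora_weights() loads the LoRA layers into the UNet and, when the checkpoint
# includes them, into the text encoder(s) as well
pipeline.load_lora_weights(
    "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic"
)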
The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/53c7bc8f227503d1c3af35e5404cce00.txt b/scrapped_outputs/53c7bc8f227503d1c3af35e5404cce00.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/53c7bc8f227503d1c3af35e5404cce00.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. 
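To make these tips concrete, here is a minimal sketch of swapping UniPCMultistepScheduler into a Stable Diffusion pipeline (the checkpoint, prompt, and step count are illustrative): Copied
import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# solver_order=2 is the recommended setting for guided (classifier-free guidance) sampling
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=20).images[0]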
UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. 
steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/53ff32e2280d5d8b52f8b18d2d9c4e4e.txt b/scrapped_outputs/53ff32e2280d5d8b52f8b18d2d9c4e4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/53ff32e2280d5d8b52f8b18d2d9c4e4e.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. 
Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. 
Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! 
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/540a9a92636ef3dc5da3833a89a5c0cb.txt b/scrapped_outputs/540a9a92636ef3dc5da3833a89a5c0cb.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdcd3a6a8fcddc27fec8e1156f213cb014eca381 --- /dev/null +++ b/scrapped_outputs/540a9a92636ef3dc5da3833a89a5c0cb.txt @@ -0,0 +1,276 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. 
A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. 
You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. 
Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do its best to "guess" the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0.
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use a refiner model with StableDiffusionXLControlNetPipeline to improve image quality, just like you can with a regular StableDiffusionXLPipeline. +See the Refine image quality section to learn how to use the refiner model. +Make sure to use StableDiffusionXLControlNetPipeline and pass image and controlnet_conditioning_scale. Copied base = StableDiffusionXLControlNetPipeline(...) +image = base( + prompt=prompt, + controlnet_conditioning_scale=0.5, + image=canny_image, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +# rest exactly as with StableDiffusionXLPipeline MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/541911677b3c3ad993c690eae1dbe017.txt b/scrapped_outputs/541911677b3c3ad993c690eae1dbe017.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f39cdc6039e6ca60233a880a99b0f0bf5e50fc4 --- /dev/null +++ b/scrapped_outputs/541911677b3c3ad993c690eae1dbe017.txt @@ -0,0 
+1,53 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. 
Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. 
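To make the methods above concrete, here is a minimal sketch (not taken from the original documentation) of how set_timesteps(), scale_model_input(), and step() fit together in a sampling loop. The sample shape and the zero-returning stand-in model are illustrative assumptions; a real run would use a trained consistency model.

import torch
from diffusers import CMStochasticIterativeScheduler

scheduler = CMStochasticIterativeScheduler()
scheduler.set_timesteps(num_inference_steps=2)

# Stand-in for a trained consistency model: anything that maps
# (scaled_sample, timestep) to a tensor of the same shape works here.
def model(sample, timestep):
    return torch.zeros_like(sample)

# Start from pure noise at the largest sigma (the shape is only illustrative).
sample = torch.randn(1, 3, 64, 64) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    scaled = scheduler.scale_model_input(sample, t)  # (sigma**2 + sigma_data**2) ** 0.5 scaling
    model_output = model(scaled, t)
    # step() returns a CMStochasticIterativeSchedulerOutput; prev_sample feeds the next iteration
    sample = scheduler.step(model_output, t, sample).prev_sample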
diff --git a/scrapped_outputs/546350eae878c9106bb3e610df19fb90.txt b/scrapped_outputs/546350eae878c9106bb3e610df19fb90.txt new file mode 100644 index 0000000000000000000000000000000000000000..74ebf95ae4d144f747165d7c8784c89b6729768f --- /dev/null +++ b/scrapped_outputs/546350eae878c9106bb3e610df19fb90.txt @@ -0,0 +1,324 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word <gta5-artwork> in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt.
You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. 
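For instance, reusing the pipeline and the cnmt trigger word from the example above, a quick sketch of dialing the LoRA down to half strength looks like this (the 0.5 value is just an example):

image = pipeline(
    "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration",
    cross_attention_kwargs={"scale": 0.5},  # 0.0 = base model only, 1.0 = fully finetuned LoRA
).images[0]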
To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. 
For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, AnimateDiff. 
And you can use any custom models finetuned from the same base models. It also works with LCM-Lora out of the box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion Pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise, you can explicitly load a CLIPVisionModelWithProjection model (from 🤗 Transformers) and pass it to a Stable Diffusion pipeline when you create it. + + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add "sunglasses". 😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality, wearing sunglasses', + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] You can use the set_ip_adapter_scale() method to adjust the text prompt and image prompt condition ratio. If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. +scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See below examples of how you can use it with Image-to-Image and Inpaint.
+ + + + Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality', +    image = image, +    ip_adapter_image=ip_image, +    num_inference_steps=50, +    generator=generator, +    strength=0.6, +).images +images[0] + + + + Copied from diffusers import AutoPipelineForInpaint +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForInpaint.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/inpaint_image.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/mask.png") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/girl.png") + +image = image.resize((512, 768)) +mask = mask.resize((512, 768)) + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image = image, + mask_image = mask, + ip_adapter_image=ip_image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, + strength=0.5, +).images +images[0] + + +IP-Adapters can also be used with SDXL Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face model. 
Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +noise_scheduler = DDIMScheduler( + num_train_timesteps=1000, + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, + steps_offset=1 +) + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + scheduler=noise_scheduler, +).to("cuda") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a feature request if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet! You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff. 
+ + + + Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image + + + + Copied # animate diff + ip adapter +import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif, load_image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "Lykon/DreamShaper" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +# scheduler +scheduler = DDIMScheduler( + clip_sample=False, + beta_start=0.00085, + beta_end=0.012, + beta_schedule="linear", + timestep_spacing="trailing", + steps_offset=1 +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# load ip_adapter +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +# load motion adapters +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-pan-left", adapter_name="pan-left") + +seed = 42 +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = [image] * 3 +prompts = ["best quality, high quality"] * 3 +negative_prompt = "bad quality, worst quality" +adapter_weights = [[0.75, 0.0, 0.0], [0.0, 0.0, 0.75], [0.0, 0.75, 0.75]] + +# generate +output_frames = [] +for prompt, image, adapter_weight in zip(prompts, images, adapter_weights): + pipe.set_adapters(["zoom-out", "tilt-up", "pan-left"], adapter_weights=adapter_weight) + output = pipe( + prompt= prompt, + num_frames=16, + guidance_scale=7.5, + num_inference_steps=30, + ip_adapter_image = image, + generator=torch.Generator("cpu").manual_seed(seed), + ) + frames = output.frames[0] + output_frames.extend(frames) + +export_to_gif(output_frames, "test_out_animation.gif") + + diff --git a/scrapped_outputs/546f78e80f8e436fd03ff0307811d5e3.txt b/scrapped_outputs/546f78e80f8e436fd03ff0307811d5e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..721028c014a8376a30101930e52296d27ef2a06d --- /dev/null +++ b/scrapped_outputs/546f78e80f8e436fd03ff0307811d5e3.txt @@ -0,0 +1,47 @@ +How to use Stable Diffusion in Apple Silicon (M1/M2) + 
+🤗 Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch mps device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion. + +Requirements + +Mac computer with Apple silicon (M1/M2) hardware. +macOS 12.6 or later (13.0 or later recommended). +arm64 version of Python. +PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps). You can install it with pip or conda using the instructions in https://pytorch.org/get-started/locally/. + +Inference Pipeline + +The snippet below demonstrates how to use the mps backend using the familiar to() interface to move the Stable Diffusion pipeline to your M1 or M2 device. +If you are using PyTorch 1.13, you need to "prime" the pipeline using an additional one-time pass through it. This is a temporary workaround for a weird issue we detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it’s ok to use just one inference step and discard the result. +We strongly recommend you use PyTorch 2 or better, as it solves a number of problems like the one described in the previous tip. + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" + +# First-time "warmup" pass if PyTorch version is 1.13 (see explanation above) +_ = pipe(prompt, num_inference_steps=1) + +# Results match those from the CPU device after the warmup pass. +image = pipe(prompt).images[0] + +Performance Recommendations + +M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does. +We recommend you use attention slicing to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% in computers without universal memory, but we have observed better performance in most Apple Silicon computers, unless you have 64 GB or more. + + + Copied +pipeline.enable_attention_slicing() + +Known Issues + +Generating multiple prompts in a batch crashes or doesn’t work reliably. We believe this is related to the mps backend in PyTorch. This is being resolved, but for now we recommend iterating instead of batching.
diff --git a/scrapped_outputs/54845054114fc0f728922cd33bdf04c6.txt b/scrapped_outputs/54845054114fc0f728922cd33bdf04c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5499925c6bde8fc8aa591156835c20a1.txt b/scrapped_outputs/5499925c6bde8fc8aa591156835c20a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/54ba8f9a6083e642ec3de9040c2b1b69.txt b/scrapped_outputs/54ba8f9a6083e642ec3de9040c2b1b69.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/54eb557d9d21e7fbc67f7c9a1215d105.txt b/scrapped_outputs/54eb557d9d21e7fbc67f7c9a1215d105.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/54eb557d9d21e7fbc67f7c9a1215d105.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert "TPU" in device_type, "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section.
After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. 
For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. 
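To see asynchronous dispatch in isolation, here is a minimal sketch that is independent of the pipeline (the array size is arbitrary and only chosen so the computation takes a noticeable amount of time):

import time

import jax.numpy as jnp

x = jnp.ones((4096, 4096))

start = time.time()
y = jnp.dot(x, x)  # dispatches the matrix multiply and returns control almost immediately
print(f"dispatch returned after {time.time() - start:.4f}s")

start = time.time()
y.block_until_ready()  # blocks until the result has actually been computed
print(f"computation finished after {time.time() - start:.4f}s")

The pipeline timing below relies on block_until_ready in exactly the same way.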
Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/55047a7e303f333a2e04d5f0639a1fee.txt b/scrapped_outputs/55047a7e303f333a2e04d5f0639a1fee.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0517cedafce5a2047cf1ffe75bd487ffe5fa88f --- /dev/null +++ b/scrapped_outputs/55047a7e303f333a2e04d5f0639a1fee.txt @@ -0,0 +1,157 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5 Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9 channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image. 
Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: cat in a field. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated with video diffusion models without any additional training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") cat in a field. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality than with use_fast_sampling=False, though still better than vanilla video generation models).
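If you want finer control over that tradeoff, the FreeInit settings can be adjusted when enabling it. The following is only a sketch and assumes pipe is the PIAPipeline instance from the example above:

# Sketch: `pipe` is the PIAPipeline loaded above; see the enable_free_init docs for all parameters.
pipe.enable_free_init(
    method="butterworth",     # noise filter used while refining the initial latents
    num_iters=3,              # each extra iteration adds a full sampling pass
    use_fast_sampling=False,  # slower, but higher quality than fast sampling
)

# ...run the pipeline as before, then disable FreeInit when you no longer need it
pipe.disable_free_init()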
PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → PIAPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +PIAPipelineOutput or tuple + +If return_dict is True, PIAPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("../checkpoints/pia-diffusers") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... ) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
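As an illustration of the callback_on_step_end arguments documented above, the following sketch logs the intermediate latents at every denoising step. It assumes pipe, image, and prompt from the earlier usage example, and that the callback returns the callback_kwargs dictionary back to the pipeline:

def log_latents(pipeline, step, timestep, callback_kwargs):
    # "latents" is available because it is listed in callback_on_step_end_tensor_inputs
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latent std = {latents.std().item():.4f}")
    return callback_kwargs  # hand the (possibly modified) tensors back to the pipeline

output = pipe(
    image=image,
    prompt=prompt,
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
)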
enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, — NumPy array of shape `(batch_size, num_frames, channels, height, width, — Torch tensor of shape (batch_size, num_frames, channels, height, width). — Output class for PIAPipeline. diff --git a/scrapped_outputs/5510fc80ecc8e6baec7d6c925948f15e.txt b/scrapped_outputs/5510fc80ecc8e6baec7d6c925948f15e.txt new file mode 100644 index 0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/5510fc80ecc8e6baec7d6c925948f15e.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. 
This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
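As a minimal usage sketch, the scheduler can be swapped into an existing pipeline through from_config; the checkpoint and prompt below are only examples:

import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# reuse the existing scheduler config so the noise schedule matches the checkpoint
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Euler can often produce good outputs in 20-30 steps
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]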
diff --git a/scrapped_outputs/55295881c91cb1cbabaab462908561a5.txt b/scrapped_outputs/55295881c91cb1cbabaab462908561a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/55295881c91cb1cbabaab462908561a5.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. 
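Because these techniques can be combined, a sketch that enables VAE slicing and tiling together with xFormers attention on a single pipeline (assuming xFormers is installed) could look like this:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

pipe.enable_vae_slicing()                          # decode latents one image at a time
pipe.enable_vae_tiling()                           # decode large images tile by tile
pipe.enable_xformers_memory_efficient_attention()  # requires xFormers

image = pipe("a beautiful landscape photograph", width=3840, height=2224).images[0]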
CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). 
Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, 
encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/55601dc1633a89ad1ec0b5eb109e2cc3.txt b/scrapped_outputs/55601dc1633a89ad1ec0b5eb109e2cc3.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/55601dc1633a89ad1ec0b5eb109e2cc3.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/5571684b87ac01ffef2ce218491b83e5.txt b/scrapped_outputs/5571684b87ac01ffef2ce218491b83e5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/5571684b87ac01ffef2ce218491b83e5.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. 
CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. 
In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/5580dcd3f0a900614b267db92de8f57d.txt b/scrapped_outputs/5580dcd3f0a900614b267db92de8f57d.txt new file mode 100644 index 0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/5580dcd3f0a900614b267db92de8f57d.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. 
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. 
Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/55a27be8fbba2dbfdf0b83ed6199557c.txt b/scrapped_outputs/55a27be8fbba2dbfdf0b83ed6199557c.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/55a27be8fbba2dbfdf0b83ed6199557c.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
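A minimal usage sketch looks like the following; the checkpoint, prompt, and token indices are only examples, and the tokenizer call shows how to look up the indices for your own prompt:

import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"

# Inspect the tokenization to locate the subject tokens; for this prompt,
# "cat" and "frog" are assumed to sit at indices 2 and 5.
print(pipe.tokenizer(prompt).input_ids)

generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    prompt=prompt,
    token_indices=[2, 5],  # the subject tokens to attend to and excite
    guidance_scale=7.5,
    num_inference_steps=50,
    max_iter_to_alter=25,  # apply attend-and-excite during the first 25 steps
    generator=generator,
).images[0]
image.save("cat_and_frog.png")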
StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps apply attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0: 0.05, 10: 0.5, 20: 0.8}) — +Dictionary mapping iteration indices to the attention thresholds used for iterative latent refinement. scale_factor (int, optional, defaults to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, defaults to a value computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation.
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alter. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/55bd02aa78a67c33517a34366832bfdb.txt b/scrapped_outputs/55bd02aa78a67c33517a34366832bfdb.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/55bd02aa78a67c33517a34366832bfdb.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/55bf7d566b2cb400e24a933b82b5da95.txt b/scrapped_outputs/55bf7d566b2cb400e24a933b82b5da95.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/55bf7d566b2cb400e24a933b82b5da95.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/55c141437caf7840617d47b2fbc9303b.txt b/scrapped_outputs/55c141437caf7840617d47b2fbc9303b.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/55c141437caf7840617d47b2fbc9303b.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 
🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! 
One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. 
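Before moving on, here is a quick sanity check (a small sketch that simply reuses the noisy_sample and noisy_residual tensors created above): the predicted residual has the same shape as the input, because the model predicts one noise value per pixel and channel. Copied
>>> noisy_residual.shape
torch.Size([1, 3, 256, 256])
>>> noisy_residual.shape == noisy_sample.shape
True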
Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_pretrained() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial.
See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/55c7d8cb3cc266b358bbfcd7dd690c40.txt b/scrapped_outputs/55c7d8cb3cc266b358bbfcd7dd690c40.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/55c7d8cb3cc266b358bbfcd7dd690c40.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. 
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/55d60ec11c0d4c1c511ae154b10adb57.txt b/scrapped_outputs/55d60ec11c0d4c1c511ae154b10adb57.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/55d60ec11c0d4c1c511ae154b10adb57.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. 
set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/560de2197c6ccf43c2c4d4ff2ac47b88.txt b/scrapped_outputs/560de2197c6ccf43c2c4d4ff2ac47b88.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/560de2197c6ccf43c2c4d4ff2ac47b88.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer. For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/5619bdf3d916b5a44f4e50cd15ad3b81.txt b/scrapped_outputs/5619bdf3d916b5a44f4e50cd15ad3b81.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/5619bdf3d916b5a44f4e50cd15ad3b81.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. 
Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/564f9332357d45beed3128943dc3dcf6.txt b/scrapped_outputs/564f9332357d45beed3128943dc3dcf6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/568116768f28f79a1775362cb936e911.txt b/scrapped_outputs/568116768f28f79a1775362cb936e911.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/56ed0df696435fec0ffabec5f64e6c3a.txt b/scrapped_outputs/56ed0df696435fec0ffabec5f64e6c3a.txt new file mode 100644 index 0000000000000000000000000000000000000000..498c3c3f41d29f480b85006bf7dde281a8ddb1f1 --- /dev/null +++ b/scrapped_outputs/56ed0df696435fec0ffabec5f64e6c3a.txt @@ -0,0 +1,167 @@ +Unconditional image generation + +Unconditional image generation is not conditioned on any text or images, unlike text- or image-to-image models. It only generates images that resemble its training data distribution. + +This guide will show you how to train an unconditional image generation model on existing datasets as well as your own custom dataset. All the training scripts for unconditional image generation can be found here if you’re interested in learning more about the training details. 
+Before running the script, make sure you install the library’s training dependencies: + + + Copied +pip install diffusers[training] accelerate datasets +Next, initialize an 🤗 Accelerate environment with: + + + Copied +accelerate config +To setup a default 🤗 Accelerate environment without choosing any configurations: + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell like a notebook, you can use: + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() + +Upload model to Hub + +You can upload your model on the Hub by adding the following argument to the training script: + + + Copied +--push_to_hub + +Save and load checkpoints + +It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script: + + + Copied +--checkpointing_steps=500 +The full training state is saved in a subfolder in the output_dir every 500 steps, which allows you to load a checkpoint and resume training if you pass the --resume_from_checkpoint argument to the training script: + + + Copied +--resume_from_checkpoint="checkpoint-1500" + +Finetuning + +You’re ready to launch the training script now! Specify the dataset name to finetune on with the --dataset_name argument and then save it to the path in --output_dir. +💡 A full training run takes 2 hours on 4xV100 GPUs. +For example, to finetune on the Oxford Flowers dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --resolution=64 \ + --output_dir="ddpm-ema-flowers-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub + +Or if you want to train your model on the Pokemon dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/pokemon" \ + --resolution=64 \ + --output_dir="ddpm-ema-pokemon-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub + + +Training with multiple GPUs + +accelerate allows for seamless multi-GPU training. Follow the instructions here +for running distributed training with accelerate. Here is an example command: + + + Copied +accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \ + --dataset_name="huggan/pokemon" \ + --resolution=64 --center_crop --random_flip \ + --output_dir="ddpm-ema-pokemon-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --use_ema \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision="fp16" \ + --logger="wandb" + +Finetuning with your own data + +There are two ways to finetune a model on your own dataset: +provide your own folder of images to the --train_data_dir argument +upload your dataset to the Hub and pass the dataset repository id to the --dataset_name argument. +💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. +Below, we explain both in more detail. 
Provide the dataset as a folder + +If you provide your own dataset as a folder, the script expects the following directory structure: + + + Copied +data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png +Pass the path to the folder containing the images to the --train_data_dir argument and launch the training: + + + Copied +accelerate launch train_unconditional.py \ + --train_data_dir <path-to-train-directory> \ + <other-arguments> +Internally, the script uses the ImageFolder to automatically build a dataset from the folder. + +Upload your data to the Hub + +💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. +To upload your dataset to the Hub, you can start by creating one with the ImageFolder feature, which creates an image column containing the PIL-encoded images, from 🤗 Datasets: + + + Copied +from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) +Then you can use the push_to_hub method to upload it to the Hub: + + + Copied +# assuming you have run the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) +Now train your model by simply setting the --dataset_name argument to the name of your dataset on the Hub. diff --git a/scrapped_outputs/572e37fc03fdc2cab30a39b8d37d8535.txt b/scrapped_outputs/572e37fc03fdc2cab30a39b8d37d8535.txt new file mode 100644 index 0000000000000000000000000000000000000000..59083c57c1a632ae7752bc6ded438137105156ce --- /dev/null +++ b/scrapped_outputs/572e37fc03fdc2cab30a39b8d37d8535.txt @@ -0,0 +1,102 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE.
It is recommended to use the second-order sde-dpmsolver++. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). 
multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
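To tie the options above together, here is a minimal sketch of how the scheduler might be swapped into an existing pipeline with from_config(); the checkpoint name and step count are illustrative, and the algorithm_type and solver_order values simply follow the recommendations in the tips above.


 Copied
# Minimal sketch: swapping DPMSolverMultistepScheduler into an existing pipeline.
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Reuse the pipeline's existing scheduler config and override the solver settings.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", solver_order=2
)

# DPMSolver is designed to produce good samples in roughly 20 steps.
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]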
diff --git a/scrapped_outputs/57627355393eb8d2c97a1b11d896757c.txt b/scrapped_outputs/57627355393eb8d2c97a1b11d896757c.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/57627355393eb8d2c97a1b11d896757c.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. 
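If you only want a quick sanity check of the speed-up on your own hardware rather than rerunning the full benchmark script, a simple timing loop is enough. The sketch below is an assumption about how such a measurement could look: the prompt, warmup scheme, and run count are illustrative and are not the settings used for the tables above.


 Copied
# Rough sketch: timing StableDiffusionPipeline with and without ToMe (illustrative settings).
import time
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

def time_pipeline(pipe, prompt="a photo of an astronaut riding a horse on mars", runs=3):
    # Warmup call so CUDA kernels and caches are initialized before timing.
    pipe(prompt, num_inference_steps=50)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        pipe(prompt, num_inference_steps=50)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs

baseline = time_pipeline(pipe)
tomesd.apply_patch(pipe, ratio=0.5)  # merge ~50% of tokens during the forward pass
tome = time_pipeline(pipe)
print(f"vanilla: {baseline:.2f}s per call, ToMe: {tome:.2f}s per call")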
You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/5778c136bc093452eef8f684bc3a38dd.txt b/scrapped_outputs/5778c136bc093452eef8f684bc3a38dd.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5794a697580bafd0c00db4771add6e01.txt b/scrapped_outputs/5794a697580bafd0c00db4771add6e01.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/5794a697580bafd0c00db4771add6e01.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/57b13c9a1442d69876ab6723aba2759b.txt b/scrapped_outputs/57b13c9a1442d69876ab6723aba2759b.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/57b13c9a1442d69876ab6723aba2759b.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. 
Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. 
The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). 
+A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. 
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
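As a rough illustration of how several of the methods documented above fit together, the following sketch loads two adapters (the same example repositories used earlier on this page), inspects them, moves one to the CPU, and finally unloads all LoRA layers. It is a minimal example rather than a complete workflow, and multi-adapter management requires the PEFT backend to be installed.


 Copied
# Minimal sketch: managing multiple LoRA adapters with the LoraLoaderMixin methods above.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two adapters under distinct names.
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

print(pipeline.get_active_adapters())  # adapters currently in use
print(pipeline.get_list_adapters())    # all available adapters per pipeline component

# Offload one adapter to the CPU to free GPU memory, then remove all LoRA layers.
pipeline.set_lora_device(adapter_names=["toy"], device="cpu")
pipeline.unload_lora_weights()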
diff --git a/scrapped_outputs/57dd85be4e6fd359fd43b6d67e606c7e.txt b/scrapped_outputs/57dd85be4e6fd359fd43b6d67e606c7e.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b663b381edb40c44b5dc45124142bca44fb798 --- /dev/null +++ b/scrapped_outputs/57dd85be4e6fd359fd43b6d67e606c7e.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. 
For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = 
"runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/57e2b387d67b88a223cac45fd804ef6a.txt b/scrapped_outputs/57e2b387d67b88a223cac45fd804ef6a.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/57e2b387d67b88a223cac45fd804ef6a.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. 
Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/57e30e98a7c957973dd24b71c2b56fed.txt b/scrapped_outputs/57e30e98a7c957973dd24b71c2b56fed.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/57e30e98a7c957973dd24b71c2b56fed.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. 
We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. 
NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. 
If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. 
If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. 
Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. 
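As a rough sketch only (the exact packages and version pins differ from example to example, so treat these entries as placeholders rather than the real DreamBooth file), a training example's requirements.txt typically lists just the extra dependencies needed on top of Diffusers itself:
accelerate>=0.16.0
torchvision
transformers>=4.25.1
ftfy
tensorboard
Jinja2
Keeping this file small and explicit means a single pip install -r requirements.txt gets a user ready to run the training script after cloning.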
Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. 
If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. 
If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. 
We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. 
Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/57f1ee28dcf8122eb2ede7539ed1abf1.txt b/scrapped_outputs/57f1ee28dcf8122eb2ede7539ed1abf1.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ae6be814c826287f059c2eb68dd72ed8a534da5 --- /dev/null +++ b/scrapped_outputs/57f1ee28dcf8122eb2ede7539ed1abf1.txt @@ -0,0 +1,245 @@ +Textual Inversion + + + + + + + + + + + + +Textual Inversion is a technique for capturing novel concepts from a small number of example images. While the technique was originally demonstrated with a latent diffusion model, it has since been applied to other model variants like Stable Diffusion. The learned concepts can be used to better control the images generated from text-to-image pipelines. It learns new “words” in the text encoder’s embedding space, which are used within text prompts for personalized image generation. + +By using just 3-5 images you can teach new concepts to a model such as Stable Diffusion for personalized image generation (image source). +This guide will show you how to train a runwayml/stable-diffusion-v1-5 model with Textual Inversion. All the training scripts for Textual Inversion used in this guide can be found here if you’re interested in taking a closer look at how things work under the hood. +There is a community-created collection of trained Textual Inversion models in the Stable Diffusion Textual Inversion Concepts Library which are readily available for inference. Over time, this’ll hopefully grow into a useful resource as more concepts are added! 
+Before you begin, make sure you install the library’s training dependencies: + + + Copied +pip install diffusers accelerate transformers +After all the dependencies have been set up, initialize a 🤗Accelerate environment with: + + + Copied +accelerate config +To setup a default 🤗 Accelerate environment without choosing any configurations: + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell like a notebook, you can use: + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() +Finally, you try and install xFormers to reduce your memory footprint with xFormers memory-efficient attention. Once you have xFormers installed, add the --enable_xformers_memory_efficient_attention argument to the training script. xFormers is not supported for Flax. + +Upload model to Hub + +If you want to store your model on the Hub, add the following argument to the training script: + + + Copied +--push_to_hub + +Save and load checkpoints + +It is often a good idea to regularly save checkpoints of your model during training. This way, you can resume training from a saved checkpoint if your training is interrupted for any reason. To save a checkpoint, pass the following argument to the training script to save the full training state in a subfolder in output_dir every 500 steps: + + + Copied +--checkpointing_steps=500 +To resume training from a saved checkpoint, pass the following argument to the training script and the specific checkpoint you’d like to resume from: + + + Copied +--resume_from_checkpoint="checkpoint-1500" + +Finetuning + +For your training dataset, download these images of a cat toy and store them in a directory: + + + Copied +from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument, and the DATA_DIR environment variable to the path of the directory containing the images. +Now you can launch the training script: +💡 A full training run takes ~1 hour on one V100 GPU. While you’re waiting for the training to complete, feel free to check out how Textual Inversion works in the section below if you’re curious! + + +Pytorch + +Hide Pytorch content + + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" +💡 If you want to increase the trainable capacity, you can associate your placeholder token, e.g. to +multiple embedding vectors. This can help the model to better capture the style of more (complex) images. +To enable training multiple embedding vectors, simply pass: + + + Copied +--num_vectors=5 + +JAX + +Hide JAX content + +If you have access to TPUs, try out the Flax training script to train even faster (this’ll also work for GPUs). 
With the same configuration settings, the Flax training script should be at least 70% faster than the PyTorch training script! ⚡️ +Before you begin, make sure you install the Flax specific dependencies: + + + Copied +pip install -U -r requirements_flax.txt +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. +Then you can launch the training script: + + + Copied +export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export DATA_DIR="./cat" + +python textual_inversion_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 --scale_lr \ + --output_dir="textual_inversion_cat" + + +Intermediate logging + +If you’re interested in following along with your model training progress, you can save the generated images from the training process. Add the following arguments to the training script to enable intermediate logging: +validation_prompt, the prompt used to generate samples (this is set to None by default and intermediate logging is disabled) +num_validation_images, the number of sample images to generate +validation_steps, the number of steps before generating num_validation_images from the validation_prompt + + + Copied +--validation_prompt="A backpack" +--num_validation_images=4 +--validation_steps=100 + +Inference + +Once you have trained a model, you can use it for inference with the StableDiffusionPipeline. +The textual inversion script will by default only save the textual inversion embedding vector(s) that have +been added to the text encoder embedding matrix and consequently been trained. + + +Pytorch + +Hide Pytorch content + +💡 The community has created a large library of different textual inversion embedding vectors, called sd-concepts-library. +Instead of training textual inversion embeddings from scratch you can also see whether a fitting textual inversion embedding has already been added to the libary. +To load the textual inversion embeddings you first need to load the base model that was used when training +your textual inversion embedding vectors. Here we assume that runwayml/stable-diffusion-v1-5 +was used as a base model so we load it first: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +Next, we need to load the textual inversion embedding vector which can be done via the TextualInversionLoaderMixin.load_textual_inversion +function. Here we’ll load the embeddings of the ”” example from before. + + + Copied +pipe.load_textual_inversion("sd-concepts-library/cat-toy") +Now we can run the pipeline making sure that the placeholder token is used in our prompt. + + + Copied +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +The function TextualInversionLoaderMixin.load_textual_inversion can not only +load textual embedding vectors saved in Diffusers’ format, but also embedding vectors +saved in Automatic1111 format. 
To do so, you can first download an embedding vector from Civitai +and then load it locally: + + + Copied +pipe.load_textual_inversion("./charturnerv2.pt") + +JAX + +Hide JAX content + +Currently there is no load_textual_inversion function for Flax, so one has to make sure the textual inversion +embedding vector is saved as part of the model after training. +The model can then be run just like any other Flax model: + + + Copied +import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +model_path = "path-to-your-trained-model" +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) + +prompt = "A backpack" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +images[0].save("cat-backpack.png") + + +How it works + + +Architecture overview from the Textual Inversion blog post. +Usually, text prompts are tokenized into an embedding before being passed to a model, which is often a transformer. Textual Inversion does something similar, but it learns a new token embedding, v*, from a special token S* in the diagram above. The model output is used to condition the diffusion model, which helps the diffusion model understand the prompt and new concepts from just a few example images. +To do this, Textual Inversion uses a generator model and noisy versions of the training images. The generator tries to predict less noisy versions of the images, and the token embedding v* is optimized based on how well the generator does. If the token embedding successfully captures the new concept, it gives more useful information to the diffusion model and helps create clearer images with less noise. This optimization process typically occurs after several thousand steps of exposure to a variety of prompt and image variants. diff --git a/scrapped_outputs/57ff31f96a540afe53acaa78dce47ee8.txt b/scrapped_outputs/57ff31f96a540afe53acaa78dce47ee8.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/57ff31f96a540afe53acaa78dce47ee8.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how to load .safetensors files, and how to convert Stable Diffusion model weights stored in other formats to .safetensors. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format.
By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. 
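As a small illustration of that lazy loading (the file name and key prefix below are placeholders), the safetensors library lets you open a checkpoint and read back only the tensors you actually need:
from safetensors import safe_open

tensors = {}
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        # Only tensors whose names match are read from disk; everything else is skipped.
        if key.startswith("text_model."):
            tensors[key] = f.get_tensor(key)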
diff --git a/scrapped_outputs/580404f4f404e2f78557a3c489f3de3f.txt b/scrapped_outputs/580404f4f404e2f78557a3c489f3de3f.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/580404f4f404e2f78557a3c489f3de3f.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. 
For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/58094ff5a2061b5fb520b651f186a3db.txt b/scrapped_outputs/58094ff5a2061b5fb520b651f186a3db.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/58094ff5a2061b5fb520b651f186a3db.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. 
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. 
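Since the reference here does not show end-to-end usage, below is a minimal sketch of how this inverse scheduler is commonly paired with its forward counterpart for latent inversion; the DiffEdit pipeline and checkpoint are used only as one example of a pipeline that exposes an inverse_scheduler slot:
import torch
from diffusers import (
    StableDiffusionDiffEditPipeline,
    DPMSolverMultistepScheduler,
    DPMSolverMultistepInverseScheduler,
)

pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# The forward scheduler drives sampling; the inverse scheduler maps an input image back to noisy latents.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipeline.scheduler.config)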
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/580fda6d219b2e486173f6645ad78935.txt b/scrapped_outputs/580fda6d219b2e486173f6645ad78935.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca96e25d4b56d29e1aaf758a2139d89e38d25509 --- /dev/null +++ b/scrapped_outputs/580fda6d219b2e486173f6645ad78935.txt @@ -0,0 +1,387 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. +Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. 
Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of frames to be generated. Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch +from diffusers import TextToVideoZeroPipeline +import numpy as np +import imageio + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24 #24 ÷ 4fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for the temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL Support +In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control + +To generate a video from a prompt with additional pose control: +Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract poses from an actual video, read the ControlNet documentation.
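If you prefer to extract the poses from your own footage rather than use the pre-extracted demo, one possible approach relies on the external controlnet_aux package (not a Diffusers dependency, so treat this as a sketch):
import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector

open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# "my_video.mp4" is a placeholder path to your own clip.
reader = imageio.get_reader("my_video.mp4", "ffmpeg")
frame_count = 8
frames = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
pose_images = [open_pose(frame) for frame in frames]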
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +import imageio +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# Fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be used to generate a video from a prompt with ControlNet models powered by SDXL: Copied import torch +import imageio +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# Fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, using the Canny edge ControlNet model instead.
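For example, the edge maps can be produced with OpenCV's Canny detector and passed to the Canny ControlNet checkpoint in place of the pose images. The sketch below is illustrative: the input file name and Canny thresholds are assumptions, and the rest mirrors the pose-guided example above. Copied
import torch
import numpy as np
import cv2
import imageio
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# Extract Canny edge maps from a video (file name and thresholds are illustrative)
reader = imageio.get_reader("my_video.mp4", "ffmpeg")
frames = [reader.get_data(i) for i in range(8)]
canny_images = []
for frame in frames:
    edges = cv2.Canny(np.array(frame), 100, 200)  # single-channel edge map
    canny_images.append(Image.fromarray(np.stack([edges] * 3, axis=-1)))  # 3 channels for ControlNet

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Same cross-frame attention processors and fixed latents as in the pose-guided example
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_images), 1, 1, 1)

prompt = "a watercolor painting of a dancer"
result = pipe(prompt=[prompt] * len(canny_images), image=canny_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)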
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +import imageio +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization The Text-To-Video, Text-To-Video with Pose Control, and Text-To-Video with Edge Control methods +can run with custom DreamBooth models, as shown below for the +Canny edge ControlNet model and the +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with the custom-trained DreamBooth model Copied import torch +import imageio +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# Set the model id to the custom DreamBooth model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# Fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can browse the available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
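As a concrete illustration of that speed/quality tradeoff, a different scheduler can be swapped into any of these pipelines with from_config. The choice of DPMSolverMultistepScheduler and the step count below are just one possible configuration, not an official recommendation, and results may differ from the default scheduler: Copied
import torch
from diffusers import TextToVideoZeroPipeline, DPMSolverMultistepScheduler

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler while keeping its configuration
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# Fewer steps are often sufficient with DPM-Solver++; quality may vary
result = pipe(prompt="A panda is playing guitar on times square", num_inference_steps=25).images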
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide video generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
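To see how the arguments documented above fit together, here is a minimal call that simply makes the documented defaults explicit (the prompt and seed are arbitrary examples): Copied
import torch
import imageio
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

output = pipe(
    prompt="A horse galloping on a beach",
    video_length=8,               # number of generated frames
    num_inference_steps=50,
    guidance_scale=7.5,
    motion_field_strength_x=12,   # motion along the x-axis (Sect. 3.3.1)
    motion_field_strength_y=12,   # motion along the y-axis (Sect. 3.3.1)
    t0=44,
    t1=47,
    generator=torch.Generator("cuda").manual_seed(0),
)

# With the default output_type, output.images is a NumPy array of frames in [0, 1]
frames = [(frame * 255).astype("uint8") for frame in output.images]
imageio.mimsave("video.mp4", frames, fps=4)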
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. 
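Analogously to the base pipeline, the SDXL variant documented above can be used end-to-end as sketched below; the prompt is arbitrary and the output handling assumes the same NumPy frame format as the base pipeline: Copied
import torch
import imageio
from diffusers import TextToVideoZeroSDXLPipeline

pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

output = pipe(prompt="An astronaut riding a horse, cinematic lighting", video_length=8)
frames = [(frame * 255).astype("uint8") for frame in output.images]
imageio.mimsave("video_sdxl.mp4", frames, fps=4)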
diff --git a/scrapped_outputs/5823e0b1e2121391f62bcbc7fbc8e22a.txt b/scrapped_outputs/5823e0b1e2121391f62bcbc7fbc8e22a.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/5823e0b1e2121391f62bcbc7fbc8e22a.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. 
Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/58257f6fe6db8d2db19da4afae93d9f0.txt b/scrapped_outputs/58257f6fe6db8d2db19da4afae93d9f0.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/58257f6fe6db8d2db19da4afae93d9f0.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. 
Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. 
+We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. 
To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/586f85e7d6ad4411713c9a2c8b8d006d.txt b/scrapped_outputs/586f85e7d6ad4411713c9a2c8b8d006d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/588643907a0ba4e7109714e7d15fce5c.txt b/scrapped_outputs/588643907a0ba4e7109714e7d15fce5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..43caa4e9d69d10bca1cf7b16f04035243b30ac36 --- /dev/null +++ b/scrapped_outputs/588643907a0ba4e7109714e7d15fce5c.txt @@ -0,0 +1,163 @@ +RePaint scheduler + + +Overview + +DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. +Intended for use with RePaintPipeline. 
+Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models +and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint + +RePaintScheduler + + +class diffusers.RePaintScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +eta: float = 0.0 +trained_betas: typing.Optional[numpy.ndarray] = None +clip_sample: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +eta (float) — +The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 -0.0 is DDIM and +1.0 is DDPM scheduler respectively. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +variance_type (str) — +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + + +RePaint is a schedule for DDPM inpainting inside a given mask. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +original_image: FloatTensor +mask: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned +diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +original_image (torch.FloatTensor) — +the original image to inpaint on. + + +mask (torch.FloatTensor) — +the mask where 0.0 values define which part of the original image to inpaint (change). + + +generator (torch.Generator, optional) — random number generator. + + +return_dict (bool) — option for returning tuple rather than +DDPMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple. 
When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/58a36e50dfa5e6d564bb82c63f1d9beb.txt b/scrapped_outputs/58a36e50dfa5e6d564bb82c63f1d9beb.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/58a36e50dfa5e6d564bb82c63f1d9beb.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. 
correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. 
Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/58cbe91b50db4a1cf3bc3aa404823c62.txt b/scrapped_outputs/58cbe91b50db4a1cf3bc3aa404823c62.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/58cbe91b50db4a1cf3bc3aa404823c62.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0 SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/58e4a80d8e5a987a229e7453014ed29e.txt b/scrapped_outputs/58e4a80d8e5a987a229e7453014ed29e.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/58e4a80d8e5a987a229e7453014ed29e.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. 
Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speed up training. You can reduce your memory usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs, so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with them, and how you can adapt them for your own use case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence.
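Conceptually, Min-SNR weighting clamps each timestep’s signal-to-noise ratio at the chosen gamma value and uses the result to rescale the per-sample loss, so that easy, low-noise timesteps don’t dominate training. The snippet below is only a rough sketch of that idea for the epsilon-prediction case (the helper name and where you would call it are illustrative, not the script’s actual API): Copied
import torch

def min_snr_loss_weights(alphas_cumprod: torch.Tensor, timesteps: torch.Tensor, snr_gamma: float = 5.0) -> torch.Tensor:
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for the sampled timesteps
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # clamp the SNR at snr_gamma and normalize by the unclamped SNR (epsilon-prediction form)
    return torch.clamp(snr, max=snr_gamma) / snr
The per-sample MSE loss is then multiplied by these weights before averaging; for v_prediction, the clamped SNR is typically divided by snr + 1 instead of snr.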
The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model. One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to setup the optimizer to learn the prior mode’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. 
Set the environment variable DATASET_NAME to the name of the dataset on the Hub, or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt parameter to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandi2-prior-pokemon-model", torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +negative_prompt = "low quality, bad quality" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/58ea4036c8f24018b308e0b6a474d685.txt b/scrapped_outputs/58ea4036c8f24018b308e0b6a474d685.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/58ea4036c8f24018b308e0b6a474d685.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads the folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub.
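Pushing to the Hub requires an authenticated session. If you work from a regular terminal rather than a notebook, you can authenticate with the Hugging Face CLI that ships with huggingface_hub (the same huggingface-cli login command referenced elsewhere in these docs, not a Diffusers-specific tool): Copied
huggingface-cli login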
In a notebook, log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namespace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all its components to the Hub.
For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/59203b97f5e18554b7fc1e1535be492f.txt b/scrapped_outputs/59203b97f5e18554b7fc1e1535be492f.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/59203b97f5e18554b7fc1e1535be492f.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. 
In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. 
Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/593654601217339b0633c52b5cc47bd9.txt b/scrapped_outputs/593654601217339b0633c52b5cc47bd9.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/593654601217339b0633c52b5cc47bd9.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. 
Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. 
+ Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
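As a quick illustration of the kwargs override described above, the sketch below swaps in a different scheduler at load time; it assumes the runwayml/stable-diffusion-v1-5 checkpoint used in the surrounding examples, and the assignment-based alternative is shown in the Examples section that follows: Copied
from diffusers import DiffusionPipeline, LMSDiscreteScheduler

# load the scheduler from its subfolder, then pass it to from_pretrained() as a keyword argument
scheduler = LMSDiscreteScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", scheduler=scheduler)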
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/593a7d9fa6522be675c68d275c6aa121.txt b/scrapped_outputs/593a7d9fa6522be675c68d275c6aa121.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/593a7d9fa6522be675c68d275c6aa121.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! 
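To make the weight arithmetic concrete (compel computes these factors for you; the numbers simply follow from the 1.1 and 0.9 bases above): ball+ scales the token weight by about 1.1, ball++ by 1.1² ≈ 1.21, ball- by 0.9, and ball---- by 0.9⁴ ≈ 0.66.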
Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. 
This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/595df158614326b528d792d9bc620cf9.txt b/scrapped_outputs/595df158614326b528d792d9bc620cf9.txt new file mode 100644 index 0000000000000000000000000000000000000000..88a46e3c7ee0477caaec1744d6e85e32374dcd84 --- /dev/null +++ b/scrapped_outputs/595df158614326b528d792d9bc620cf9.txt @@ -0,0 +1,109 @@ +DDIM + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. 
In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddim.py +Unconditional Image Generation +- + +DDIMPipeline + + +class diffusers.DDIMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +use_clipped_model_output: typing.Optional[bool] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +eta (float, optional, defaults to 0.0) — +The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +use_clipped_model_output (bool, optional, defaults to None) — +if True or False, see documentation for DDIMScheduler.step. If None, nothing is passed +downstream to the scheduler. So use None for schedulers which don’t support this argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
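To tie the reference above together, here is a minimal usage sketch of DDIMPipeline for unconditional sampling; it is only an illustration, and the checkpoint ID is just one example of a small compatible unconditional UNet on the Hub:
from diffusers import DDIMPipeline

# any unconditional UNet + scheduler checkpoint can be loaded; this one is small and quick to test
pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
pipeline.to("cuda")  # optional, but much faster on a GPU

# eta=0.0 keeps sampling deterministic; 50 steps matches the documented default
image = pipeline(batch_size=1, num_inference_steps=50, eta=0.0).images[0]
image.save("ddim_sample.png")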
diff --git a/scrapped_outputs/59655529eca2c8a2d91b0a9d103f8f88.txt b/scrapped_outputs/59655529eca2c8a2d91b0a9d103f8f88.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/59b462e59ac1354fe9ed29d04421e1e3.txt b/scrapped_outputs/59b462e59ac1354fe9ed29d04421e1e3.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/59b462e59ac1354fe9ed29d04421e1e3.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
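Since this scheduler is still under construction, the sketch below is only a rough illustration of how set_timesteps() and step_pred() (documented next) are meant to compose in a manual sampling loop; the toy score function and the assumption that step_pred() returns a (sample, mean) pair are illustrative, not part of a stable public contract:
import torch
from diffusers.schedulers import ScoreSdeVpScheduler

scheduler = ScoreSdeVpScheduler(num_train_timesteps=2000)
scheduler.set_timesteps(num_inference_steps=50)

def toy_score_model(sample, t):
    # stand-in for a trained score network; a real model estimates the score of the noised data
    return -sample

sample = torch.randn(1, 3, 64, 64)
for t in scheduler.timesteps:
    score = toy_score_model(sample, t)
    # assumed return value: the updated sample and its mean
    sample, sample_mean = scheduler.step_pred(score, sample, t)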
set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/59e314e7be7ac72a9ef33ba031249278.txt b/scrapped_outputs/59e314e7be7ac72a9ef33ba031249278.txt new file mode 100644 index 0000000000000000000000000000000000000000..d61c3f265da975aac5d562125c788f3e245e5b73 --- /dev/null +++ b/scrapped_outputs/59e314e7be7ac72a9ef33ba031249278.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
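Before the class reference below, here is a minimal, hedged sketch of how a ControlNetModel is typically paired with a conditioning image inside a pipeline, using a Canny edge map as the spatial control signal; the checkpoint names, the example image URL, and the reliance on the opencv-python package are illustrative assumptions rather than requirements of the class:
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

# example input image (assumed to be reachable); any RGB image works
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)

# build a Canny edge map to use as the conditioning input
edges = cv2.Canny(np.array(image), 100, 200)
canny_image = Image.fromarray(np.concatenate([edges[:, :, None]] * 3, axis=2))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a portrait of a woman with a pearl earring", image=canny_image, num_inference_steps=30).images[0]
image.save("controlnet_canny.png")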
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. 
Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. 
attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height // resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_block_res_sample (torch.Tensor) — +The activation of the middle block (the lowest sample resolution).
Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
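To round off the reference above, a small sketch of the from_unet() helper; the checkpoint ID is only an example, and any UNet2DConditionModel can be used:
from diffusers import ControlNetModel, UNet2DConditionModel

# copy the matching weights of an existing UNet into a freshly initialized ControlNet
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)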
diff --git a/scrapped_outputs/5a1834ef9fb90138f0ca1ffeef736f61.txt b/scrapped_outputs/5a1834ef9fb90138f0ca1ffeef736f61.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5a1c7e6ce90ee1f031e0a4f9495ae128.txt b/scrapped_outputs/5a1c7e6ce90ee1f031e0a4f9495ae128.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/5a1c7e6ce90ee1f031e0a4f9495ae128.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. 
The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
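This scheduler drives ConsistencyModelPipeline, so a short, hedged sketch of one-step and few-step sampling may help; the checkpoint ID below is one of the converted consistency-model checkpoints on the Hub and is only an example:
import torch
from diffusers import ConsistencyModelPipeline

pipe = ConsistencyModelPipeline.from_pretrained("openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16)
pipe.to("cuda")

# one-step generation is the default for consistency models
image = pipe(num_inference_steps=1).images[0]
image.save("consistency_onestep.png")

# a few extra steps trade compute for sample quality
image = pipe(num_inference_steps=2).images[0]
image.save("consistency_multistep.png")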
CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/5a436a71b65c3c7c7376e62be9a29232.txt b/scrapped_outputs/5a436a71b65c3c7c7376e62be9a29232.txt new file mode 100644 index 0000000000000000000000000000000000000000..88a46e3c7ee0477caaec1744d6e85e32374dcd84 --- /dev/null +++ b/scrapped_outputs/5a436a71b65c3c7c7376e62be9a29232.txt @@ -0,0 +1,109 @@ +DDIM + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddim.py +Unconditional Image Generation +- + +DDIMPipeline + + +class diffusers.DDIMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +use_clipped_model_output: typing.Optional[bool] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +eta (float, optional, defaults to 0.0) — +The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +use_clipped_model_output (bool, optional, defaults to None) — +if True or False, see documentation for DDIMScheduler.step. If None, nothing is passed +downstream to the scheduler. So use None for schedulers which don’t support this argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/5a69a8781670307ad26a34d563ee7313.txt b/scrapped_outputs/5a69a8781670307ad26a34d563ee7313.txt new file mode 100644 index 0000000000000000000000000000000000000000..1c2807daa00904639945ad258ca6e10542925a4c --- /dev/null +++ b/scrapped_outputs/5a69a8781670307ad26a34d563ee7313.txt @@ -0,0 +1,136 @@ +Cycle Diffusion Cycle Diffusion is a text guided image-to-image generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. The abstract from the paper is: Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. The code is publicly available at this https URL. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. CycleDiffusionPipeline class diffusers.CycleDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can only be an +instance of DDIMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image to image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: typing.Union[str, typing.List[str]] source_prompt: typing.Union[str, typing.List[str]] image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 source_guidance_scale: typing.Optional[float] = 1 num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. source_guidance_scale (float, optional, defaults to 1) — +Guidance scale for the source prompt. This is useful to control the amount of influence the source +prompt has for encoding. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Example: Copied import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import CycleDiffusionPipeline, DDIMScheduler + +# load the pipeline +# make sure you're logged in with `huggingface-cli login` +model_id_or_path = "CompVis/stable-diffusion-v1-4" +scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler") +pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda") + +# let's download an initial image +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("horse.png") + +# let's specify a prompt +source_prompt = "An astronaut riding a horse" +prompt = "An astronaut riding an elephant" + +# call the pipeline +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.8, + guidance_scale=2, + source_guidance_scale=1, +).images[0] + +image.save("horse_to_elephant.png") + +# let's try another example +# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion +url = ( + "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png" +) +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("black.png") + +source_prompt = "A black colored car" +prompt = "A blue colored car" + +# call the pipeline +torch.manual_seed(0) +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.85, + guidance_scale=3, + source_guidance_scale=1, +).images[0] + +image.save("black_to_blue.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPiplineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/5aa04c7dd1d541e817952d785c828dc7.txt b/scrapped_outputs/5aa04c7dd1d541e817952d785c828dc7.txt new file mode 100644 index 0000000000000000000000000000000000000000..9ca029bb91c67a0ec67932bad4223385e8d4a365 --- /dev/null +++ b/scrapped_outputs/5aa04c7dd1d541e817952d785c828dc7.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the ~trl.DDPOTrainer. For more information, check out the ~trl.DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/5aa14e8b650076421a73339e380998b7.txt b/scrapped_outputs/5aa14e8b650076421a73339e380998b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/5aa14e8b650076421a73339e380998b7.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. 
Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. 
If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. 
checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. 
For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. +For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. 
"scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + 
"transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/5ab4a7d4d8ae55f2104d6c26db86926f.txt b/scrapped_outputs/5ab4a7d4d8ae55f2104d6c26db86926f.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/5ab4a7d4d8ae55f2104d6c26db86926f.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/5abd29ad8d2ace8abd2c202068e96cfa.txt b/scrapped_outputs/5abd29ad8d2ace8abd2c202068e96cfa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/5abd29ad8d2ace8abd2c202068e96cfa.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. 
Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! 
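As a minimal sketch of that last step (assuming you are already authenticated with the Hub and that the dataset repository name below is a placeholder you replace with your own), the glb file can be pushed with huggingface_hub:

from huggingface_hub import upload_file

# "your-username/3d-assets" is a hypothetical dataset repo; create it first or point repo_id at an existing one.
upload_file(
    path_or_fileobj="3d_cake.glb",
    path_in_repo="3d_cake.glb",
    repo_id="your-username/3d-assets",
    repo_type="dataset",
)

Once uploaded, the Dataset viewer can render the glb file directly in the repository page.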
diff --git a/scrapped_outputs/5acb59bb71e18ca6da430f5ce9a68f11.txt b/scrapped_outputs/5acb59bb71e18ca6da430f5ce9a68f11.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/5acb59bb71e18ca6da430f5ce9a68f11.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... 
model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
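As a rough illustration of the enable_freeu() method documented above, here is a minimal sketch; the scaling factors are placeholder values only, so pick the combination recommended for your base model in the official FreeU repository:

import torch
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# s1/s2 attenuate the skip-connection features, b1/b2 amplify the backbone features.
# These numbers are illustrative, not tuned for the upscaler.
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# Restore the default behavior when you are done experimenting.
pipeline.disable_freeu()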
diff --git a/scrapped_outputs/5b09521dfb53c90f784374441589184c.txt b/scrapped_outputs/5b09521dfb53c90f784374441589184c.txt new file mode 100644 index 0000000000000000000000000000000000000000..de6c58fcde78c492edef95532e5c2bb054c86845 --- /dev/null +++ b/scrapped_outputs/5b09521dfb53c90f784374441589184c.txt @@ -0,0 +1,81 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. 
beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. 
timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. 
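In practice you rarely call step() yourself; the scheduler is usually swapped into an existing pipeline. A minimal usage sketch, assuming a Stable Diffusion checkpoint such as runwayml/stable-diffusion-v1-5 is available:

import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config; solver_order=2 is the setting recommended above for guided sampling.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# UniPC targets few-step sampling, so a low step count is typical.
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]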
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/5b3cc8ab41623823f59191a4778fecaa.txt b/scrapped_outputs/5b3cc8ab41623823f59191a4778fecaa.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/5b3cc8ab41623823f59191a4778fecaa.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt unlike the existing text-to-image diffusion models such as Stable Diffusion which only generates an image. With almost the same number of parameters, LDM3D achieves to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper ldm3d-4c. The new version of LDM3D using 4 channels inputs instead of 6-channels inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
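A brief sketch of the memory helpers above applied to the LDM3D pipeline (loading in float16 is an optional assumption here, not a requirement of the checkpoint):

import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained(
    "Intel/ldm3d-4c", torch_dtype=torch.float16
).to("cuda")

# Trade a little decoding speed for a lower peak memory footprint.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

output = pipe("a photo of an astronaut riding a horse on mars")
rgb_image, depth_image = output.rgb[0], output.depth[0]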
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. 
Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from the community pipelines. diff --git a/scrapped_outputs/5b58482cd179949ca4b96b8e9e9f6f4c.txt b/scrapped_outputs/5b58482cd179949ca4b96b8e9e9f6f4c.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/5b58482cd179949ca4b96b8e9e9f6f4c.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two components: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs, and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images in batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. For Stable Diffusion v1.5 on 512x512 images: first-generation Gaudi: 3.80s latency (batch size = 1), 0.308 images/s throughput (batch size = 8); Gaudi2: 1.33s latency (batch size = 1), 1.081 images/s throughput (batch size = 8). For Stable Diffusion v2.1 on 768x768 images: first-generation Gaudi: 10.2s latency (batch size = 1), 0.108 images/s throughput (batch size = 4); Gaudi2: 3.17s latency (batch size = 1), 0.379 images/s throughput (batch size = 8). diff --git a/scrapped_outputs/5b6d48c2a7ceec9aaa0d5a458fc715d8.txt b/scrapped_outputs/5b6d48c2a7ceec9aaa0d5a458fc715d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/5b6d48c2a7ceec9aaa0d5a458fc715d8.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference.
With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from a 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. 
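If you want to confirm that the SDPA-backed processor is active (rather than the vanilla processor installed earlier with set_default_attn_processor()), a minimal check is sketched below; it assumes AttnProcessor2_0, the processor class Diffusers selects when PyTorch 2's scaled_dot_product_attention is available. The full SDPA-enabled pipeline run follows after it. Copied
from diffusers import StableDiffusionXLPipeline
from diffusers.models.attention_processor import AttnProcessor2_0
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

# attn_processors maps every attention module in the UNet to its processor instance.
uses_sdpa = all(isinstance(p, AttnProcessor2_0) for p in pipe.unet.attn_processors.values())
print(uses_sdpa)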
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. torch.compile PyTorch 2 includes torch.compile which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduces the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed which when placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency and it becomes more evident when the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, any CPU and GPU communication sync should be none or be kept to a bare minimum because it can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consists of attention blocks and feed-forward blocks. 
In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. 
Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/5ba04db12e3c1827004621a3cfc6bd0b.txt b/scrapped_outputs/5ba04db12e3c1827004621a3cfc6bd0b.txt new file mode 100644 index 0000000000000000000000000000000000000000..0e7f0031784cb18f903bdc8b268f914396bcafa5 --- /dev/null +++ b/scrapped_outputs/5ba04db12e3c1827004621a3cfc6bd0b.txt @@ -0,0 +1,628 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
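Because the introduction above points you to the Schedulers guide, here is a short sketch of swapping the default scheduler on this pipeline; UniPCMultistepScheduler is only an illustrative choice, and the checkpoint names are taken from the canny example further below: Copied
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, UniPCMultistepScheduler

controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Rebuild a different scheduler from the current scheduler's config and swap it in place.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)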
StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation.
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
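To show how enable_freeu() fits into the example above, here is a brief sketch that reuses the pipe, prompt, and canny_image objects from that example; the scaling factors are an assumption based on values reported for SDXL in the official FreeU repository, not values prescribed by this pipeline: Copied
# Assumed FreeU settings for SDXL; tune s1/s2/b1/b2 following the official FreeU repository.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)
image = pipe(prompt, controlnet_conditioning_scale=0.5, image=canny_image).images[0]

# Switch the mechanism off again when it is no longer wanted.
pipe.disable_freeu()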
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
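As a usage sketch for encode_prompt(), and assuming it returns the four embedding tensors in the order shown below (prompt, negative prompt, pooled, negative pooled) and that the pipeline runs on a CUDA device, you can pre-compute the embeddings once and pass them to the pipeline call instead of raw text, reusing the pipe and canny_image objects from the example above: Copied
# Pre-compute the text embeddings once, then reuse them across calls.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    negative_prompt="low quality, bad quality, sketches",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,
).images[0]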
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... 
).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. 
When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
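The memory helpers and FreeU toggles documented above can be combined on a single pipeline instance. The following is a minimal sketch, reusing the checkpoints from the example above; the specific FreeU factors shown are values often suggested for SDXL rather than settings taken from this page, so treat them as a starting point: Copied
>>> import torch
>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel

>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... ).to("cuda")

>>> # Lower peak VRAM during VAE decoding at a small speed cost
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()

>>> # s1/s2 attenuate skip features, b1/b2 amplify backbone features (illustrative values)
>>> pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)

>>> # Revert to the default behaviour when finished
>>> pipe.disable_freeu()
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()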
diff --git a/scrapped_outputs/5bc22fdf83491ce5d383d8b2381a60e0.txt b/scrapped_outputs/5bc22fdf83491ce5d383d8b2381a60e0.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/5bc22fdf83491ce5d383d8b2381a60e0.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. 
encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. 
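To make the expected tensor shapes concrete, here is a minimal sketch that runs a single forward pass on random tensors standing in for real CLIP embeddings. The tiny configuration values are chosen only to keep the example fast; the prior used in practice is much larger. Copied
import torch
from diffusers import PriorTransformer

# Deliberately small model so the shape check runs quickly on CPU
prior = PriorTransformer(
    num_attention_heads=2,
    attention_head_dim=32,
    num_layers=2,
    embedding_dim=768,
    num_embeddings=77,
)

batch_size = 2
hidden_states = torch.randn(batch_size, 768)              # current noisy image-embedding estimate
timestep = torch.tensor([10] * batch_size)                # current denoising step
proj_embedding = torch.randn(batch_size, 768)             # embedding the denoising is conditioned on
encoder_hidden_states = torch.randn(batch_size, 77, 768)  # per-token text hidden states

out = prior(hidden_states, timestep, proj_embedding, encoder_hidden_states=encoder_hidden_states)
print(out.predicted_image_embedding.shape)  # torch.Size([2, 768])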
diff --git a/scrapped_outputs/5c1828510742a6bb465d5b1837a517f0.txt b/scrapped_outputs/5c1828510742a6bb465d5b1837a517f0.txt new file mode 100644 index 0000000000000000000000000000000000000000..31cbdde7d3f5e542cc1b460fc970c1544f49a07d --- /dev/null +++ b/scrapped_outputs/5c1828510742a6bb465d5b1837a517f0.txt @@ -0,0 +1,71 @@ +Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with a SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which let’s you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. 
When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. 
Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] Saving a pipeline after fusing the adapters To properly save a pipeline after it’s been loaded with the adapters, it should be serialized like so: Copied pipe.fuse_lora(lora_scale=1.0) +pipe.unload_lora_weights() +pipe.save_pretrained("path-to-pipeline") diff --git a/scrapped_outputs/5c4052bc3bcda8440d7759dc074ea441.txt b/scrapped_outputs/5c4052bc3bcda8440d7759dc074ea441.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/5c4052bc3bcda8440d7759dc074ea441.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/5c82ec39f545d75e786d77f102450e27.txt b/scrapped_outputs/5c82ec39f545d75e786d77f102450e27.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afefc9802e2e300fc8f75dd39c8a1e63c7b33a2 --- /dev/null +++ b/scrapped_outputs/5c82ec39f545d75e786d77f102450e27.txt @@ -0,0 +1,323 @@ +Editing Implicit Assumptions in Text-to-Image Diffusion Models + + +Overview + +Editing Implicit Assumptions in Text-to-Image Diffusion Models by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov. +The abstract of the paper is the following: +Text-to-image diffusion models often make implicit assumptions about the world when generating images. 
While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a “source” under-specified prompt for which the model makes an implicit assumption (e.g., “a pack of roses”), and a “destination” prompt that describes the same setting, but with a specified desired attribute (e.g., “a pack of blue roses”). TIME then updates the model’s cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model’s parameters in under one second. To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionModelEditingPipeline +Text-to-Image Model Editing +🤗 Space) +This pipeline enables editing the diffusion model weights, such that its assumptions on a given concept are changed. The resulting change is expected to take effect in all prompt generations pertaining to the edited concept. + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionModelEditingPipeline + +model_ckpt = "CompVis/stable-diffusion-v1-4" +pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt) + +pipe = pipe.to("cuda") + +source_prompt = "A pack of roses" +destination_prompt = "A pack of blue roses" +pipe.edit_model(source_prompt, destination_prompt) + +prompt = "A field of roses" +image = pipe(prompt).images[0] +image.save("field_of_roses.png") + +StableDiffusionModelEditingPipeline + + +class diffusers.StableDiffusionModelEditingPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: SchedulerMixin +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True +with_to_k: bool = True +with_augs: list = ['A photo of ', 'An image of ', 'A picture of '] + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. 
+ + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + +with_to_k (bool) — +Whether to edit the key projection matrices along wiht the value projection matrices. + + +with_augs (list) — +Textual augmentations to apply while editing the text-to-image model. Set to [] for no augmentations. + + + +Pipeline for text-to-image model editing using “Editing Implicit Assumptions in Text-to-Image Diffusion Models”. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. +Examples: + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +edit_model + +< +source +> +( +source_prompt: str +destination_prompt: str +lamb: float = 0.1 +restart_params: bool = True + +) + + +Parameters + +source_prompt (str) — +The source prompt containing the concept to be edited. + + +destination_prompt (str) — +The destination prompt. Must contain all words from source_prompt with additional ones to specify the +target edit. + + +lamb (float, optional, defaults to 0.1) — +The lambda parameter specifying the regularization intensity. Smaller values increase the editing power. + + +restart_params (bool, optional, defaults to True) — +Restart the model parameters to their pre-trained version before editing. This is done to avoid edit +compounding. When it is False, edits accumulate. + + + +Apply model editing via closed-form solution (see Eq. 5 in the TIME paper https://arxiv.org/abs/2303.08084) + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/5c94c230072d2ce047d06cdbbe95f976.txt b/scrapped_outputs/5c94c230072d2ce047d06cdbbe95f976.txt new file mode 100644 index 0000000000000000000000000000000000000000..88a46e3c7ee0477caaec1744d6e85e32374dcd84 --- /dev/null +++ b/scrapped_outputs/5c94c230072d2ce047d06cdbbe95f976.txt @@ -0,0 +1,109 @@ +DDIM + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddim.py +Unconditional Image Generation +- + +DDIMPipeline + + +class diffusers.DDIMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +use_clipped_model_output: typing.Optional[bool] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +eta (float, optional, defaults to 0.0) — +The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM). 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +use_clipped_model_output (bool, optional, defaults to None) — +if True or False, see documentation for DDIMScheduler.step. If None, nothing is passed +downstream to the scheduler. So use None for schedulers which don’t support this argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/5caa986774eaaa734f9f4e647105b79c.txt b/scrapped_outputs/5caa986774eaaa734f9f4e647105b79c.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/5caa986774eaaa734f9f4e647105b79c.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. 
It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/5cc65cf1df7b8235fb4e8ec38b636c17.txt b/scrapped_outputs/5cc65cf1df7b8235fb4e8ec38b636c17.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/5cc65cf1df7b8235fb4e8ec38b636c17.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/5cf36d5bd712523840d0a5801a2c6e97.txt b/scrapped_outputs/5cf36d5bd712523840d0a5801a2c6e97.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2cc2de2c2b439a4068ad959cd182522bf83b8b7 --- /dev/null +++ b/scrapped_outputs/5cf36d5bd712523840d0a5801a2c6e97.txt @@ -0,0 +1,72 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable Diffusion with samplers from k-diffusion. Note that most of the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/5cfc183cf4a35db9021ff97ae1ece76b.txt b/scrapped_outputs/5cfc183cf4a35db9021ff97ae1ece76b.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/5cfc183cf4a35db9021ff97ae1ece76b.txt @@ -0,0 +1,65 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "scaled_linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to False) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True.
sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/5d0ac536e7b15f12be269d7428d2e1eb.txt b/scrapped_outputs/5d0ac536e7b15f12be269d7428d2e1eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..07086f82bad81c666b44ca2d095feabb72569dd6 --- /dev/null +++ b/scrapped_outputs/5d0ac536e7b15f12be269d7428d2e1eb.txt @@ -0,0 +1,261 @@ +Pseudo numerical methods for diffusion models (PNDM) + + +Overview + +Original implementation can be found here. + +PNDMScheduler + + +class diffusers.PNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +skip_prk_steps: bool = False +set_alpha_to_one: bool = False +prediction_type: str = 'epsilon' +steps_offset: int = 0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +skip_prk_steps (bool) — +allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required +before plms steps; defaults to False. + + +set_alpha_to_one (bool, default False) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process) +or v_prediction (see section 2.4 https://imagen.research.google/video/paper.pdf) + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + + +Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques, +namely Runge-Kutta method and a linear multi-step method. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. 
+For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). +This function calls step_prk() or step_plms() depending on the internal variable counter. + +step_plms + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple +times to approximate the solution. + +step_prk + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the +solution to the differential equation. 
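As a usage sketch complementing the reference above, a pipeline's default scheduler can typically be swapped for PNDMScheduler (or the LCMScheduler documented earlier) by rebuilding it from the existing scheduler config; the checkpoint id and step count here are illustrative: Copied
import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Rebuild a PNDMScheduler from the pipeline's existing scheduler config and attach it
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=50).images[0]
Because from_config() reuses the saved configuration, this swap keeps the checkpoint's beta schedule and timestep settings intact while only changing the stepping algorithm.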
diff --git a/scrapped_outputs/5d0fba1b97889abf64f8a8781d03c205.txt b/scrapped_outputs/5d0fba1b97889abf64f8a8781d03c205.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5d1398af1804bead602ccd3592fc7ce3.txt b/scrapped_outputs/5d1398af1804bead602ccd3592fc7ce3.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/5d1398af1804bead602ccd3592fc7ce3.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the +expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing a mask for the image batch. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W). For a numpy array, the expected shape would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all masked areas, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background.
strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
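For orientation, a brief sketch of loading LoRA weights into the inpainting pipeline; the repository id and adapter name are placeholders rather than real checkpoints: Copied
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights into the UNet and text encoder; the repository id is a placeholder
pipe.load_lora_weights("your-username/your-inpaint-lora", adapter_name="example_lora")

# Inpainting then proceeds exactly as in the example above, with the LoRA layers active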
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/5d2afd0c2b8b3fa303097b1ec66c47b1.txt b/scrapped_outputs/5d2afd0c2b8b3fa303097b1ec66c47b1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c54914f17fe9d0916ab4c87a6550c0de334316 --- /dev/null +++ b/scrapped_outputs/5d2afd0c2b8b3fa303097b1ec66c47b1.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. 
Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate omegaconf invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/5d6ad3cf0389757eec3c65b54dabdae2.txt b/scrapped_outputs/5d6ad3cf0389757eec3c65b54dabdae2.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/5d6ad3cf0389757eec3c65b54dabdae2.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. 
+The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image based on the text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will automatically accept it for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next, install diffusers and its dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default, diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
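To make the note above concrete, here is a minimal sketch of loading the stage 1 and stage 2 weights once and re-wrapping the same components as image-to-image pipelines (the img2img_stage_1 and img2img_stage_2 names are illustrative, not from the original docs):

from diffusers import DiffusionPipeline, IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline
import torch

# load the text-to-image weights once
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)

# re-wrap the already loaded components as image-to-image pipelines,
# so the weights are neither downloaded nor held in memory a second time
img2img_stage_1 = IFImg2ImgPipeline(**stage_1.components)
img2img_stage_2 = IFImg2ImgSuperResolutionPipeline(**stage_2.components)

The full image-to-image example below instead loads the dedicated pipelines from scratch, which is equivalent but initializes the weights again.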
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can also be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
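The same component-reuse pattern applies here. A minimal sketch, assuming the stage_1 and stage_2 pipelines from one of the examples above are still in memory (the inpaint_* names are illustrative):

from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline

# reuse the components of the already loaded stage 1 / stage 2 pipelines for inpainting
inpaint_stage_1 = IFInpaintingPipeline(**stage_1.components)
inpaint_stage_2 = IFInpaintingSuperResolutionPipeline(**stage_2.components)

The full inpainting example below loads the dedicated pipelines from scratch instead.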
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for fewer timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the text encoder and pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
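Because the stage II super-resolution pipelines in the examples above are loaded with text_encoder=None, the prompt embeddings are computed once with the stage I pipeline's encode_prompt() and then reused. A minimal sketch of that pattern, assuming the same DeepFloyd/IF-I-XL-v1.0 checkpoint as above; the negative prompt wording is only an illustrative choice:

>>> import torch
>>> from diffusers import IFInpaintingPipeline

>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # encode once; clean_caption=True requires beautifulsoup4 and ftfy to be installed
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(
...     "blue sunglasses",
...     do_classifier_free_guidance=True,
...     num_images_per_prompt=1,
...     negative_prompt="low quality, blurry",
...     clean_caption=True,
... )

>>> # pass prompt_embeds=prompt_embeds and negative_prompt_embeds=negative_embeds to both
>>> # the stage I pipeline and the stage II super-resolution pipeline (loaded with text_encoder=None)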
diff --git a/scrapped_outputs/5d833f4098212045ef0296df2c0e9499.txt b/scrapped_outputs/5d833f4098212045ef0296df2c0e9499.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/5d833f4098212045ef0296df2c0e9499.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. 
num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. 
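The slicing and tiling helpers documented above can be combined with the pipeline usage shown at the top of this page to keep decoder memory low at large resolutions. A minimal sketch, reusing the madebyollin/taesdxl checkpoint from the SDXL example; the prompt is only an illustrative choice:

import torch
from diffusers import DiffusionPipeline, AutoencoderTiny

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16)

# decode tile by tile and slice by slice to reduce peak memory usage
pipe.vae.enable_tiling()
pipe.vae.enable_slicing()

pipe = pipe.to("cuda")
image = pipe("slice of delicious New York-style berry cheesecake", num_inference_steps=25).images[0]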
diff --git a/scrapped_outputs/5db932cbdaa3df3a65a869e0b38d7267.txt b/scrapped_outputs/5db932cbdaa3df3a65a869e0b38d7267.txt new file mode 100644 index 0000000000000000000000000000000000000000..63a6ab586aafd81906ffd6f570eae9df138f4931 --- /dev/null +++ b/scrapped_outputs/5db932cbdaa3df3a65a869e0b38d7267.txt @@ -0,0 +1,120 @@ +Stable diffusion pipelines + +Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. +Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the specific pipeline for latent diffusion that is part of 🤗 Diffusers. +For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official launch announcement post and this section of our own blog post. +Tips: +To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: +Overview: +Pipeline +Tasks +Colab +Demo +StableDiffusionPipeline +Text-to-Image Generation + +🤗 Stable Diffusion +StableDiffusionImg2ImgPipeline +Image-to-Image Text-Guided Generation + +🤗 Diffuse the Rest +StableDiffusionInpaintPipeline +Experimental – Text-Guided Image Inpainting + +Coming soon +StableDiffusionDepth2ImgPipeline +Experimental – Depth-to-Image Text-Guided Generation + +Coming soon +StableDiffusionImageVariationPipeline +Experimental – Image Variation Generation + +🤗 Stable Diffusion Image Variations +StableDiffusionUpscalePipeline +Experimental – Text-Guided Image Super-Resolution + +Coming soon +StableDiffusionLatentUpscalePipeline +Experimental – Text-Guided Image Super-Resolution + +Coming soon +StableDiffusionInstructPix2PixPipeline +Experimental – Text-Based Image Editing + +InstructPix2Pix: Learning to Follow Image Editing Instructions +StableDiffusionAttendAndExcitePipeline +Experimental – Text-to-Image Generation + +Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models +StableDiffusionPix2PixZeroPipeline +Experimental – Text-Based Image Editing + +Zero-shot Image-to-Image Translation + +Tips + + +How to load and use different schedulers. + +The stable diffusion pipeline uses PNDMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the stable diffusion pipeline such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. 
For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) + +How to convert all use cases with multiple or single pipeline + +If you want to use all possible use cases in a single DiffusionPipeline you can either: +Make use of the Stable Diffusion Mega Pipeline or +Make use of the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline + +StableDiffusionPipelineOutput + + +class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/5dde453fa7d2e5027abce0e2e2014338.txt b/scrapped_outputs/5dde453fa7d2e5027abce0e2e2014338.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/5dde453fa7d2e5027abce0e2e2014338.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. 
Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/5df7fa1a9adf013a6e9bf6441e5ab23e.txt b/scrapped_outputs/5df7fa1a9adf013a6e9bf6441e5ab23e.txt new file mode 100644 index 0000000000000000000000000000000000000000..9e00876461007b85dd1f7f798abf317e313c05a5 --- /dev/null +++ b/scrapped_outputs/5df7fa1a9adf013a6e9bf6441e5ab23e.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
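As a quick illustration of how a trained ControlNetModel is consumed, the sketch below plugs a canny-edge checkpoint into StableDiffusionControlNetPipeline; the lllyasviel/sd-controlnet-canny checkpoint, the local edge-map path, and the prompt are assumptions made here for illustration:

import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# a ControlNet trained on canny edge maps (checkpoint name assumed for illustration)
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# the conditioning image is expected to already be an edge map (placeholder path)
canny_image = Image.open("canny_edges.png")

image = pipe(
    "a futuristic cityscape at night", image=canny_image, num_inference_steps=20
).images[0]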
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. 
encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. 
If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. 
The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
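When training a new ControlNet rather than loading a pretrained one, the from_unet() method documented above bootstraps the model from an existing UNet2DConditionModel. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 UNet:

from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# copies the UNet encoder weights into the ControlNet; the conditioning embedding and
# zero-convolution layers are freshly initialized
controlnet = ControlNetModel.from_unet(unet, conditioning_channels=3)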
diff --git a/scrapped_outputs/5dfba75a96b22c06dcaf287aa70b553a.txt b/scrapped_outputs/5dfba75a96b22c06dcaf287aa70b553a.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/5dfba75a96b22c06dcaf287aa70b553a.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. 
device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/5e03b264f7ccd4112f570b1dc4545725.txt b/scrapped_outputs/5e03b264f7ccd4112f570b1dc4545725.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5e16436c8a290f930012e75f026025f1.txt b/scrapped_outputs/5e16436c8a290f930012e75f026025f1.txt new file mode 100644 index 0000000000000000000000000000000000000000..9641614cdfafca849f4b6b239ffcf8908db727bc --- /dev/null +++ b/scrapped_outputs/5e16436c8a290f930012e75f026025f1.txt @@ -0,0 +1,300 @@ +AudioLDM + + +Overview + +AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. +Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. +This pipeline was contributed by sanchit-gandhi. The original codebase can be found here. + +Text-to-Audio + +The AudioLDMPipeline can be used to load pre-trained weights from cvssp/audioldm-s-full-v2 and generate text-conditional audio outputs: + + + Copied +from diffusers import AudioLDMPipeline +import torch +import scipy + +repo_id = "cvssp/audioldm-s-full-v2" +pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +# save the audio sample as a .wav file +scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) + +Tips + +Prompts: +Descriptive prompt inputs work best: you can use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g., “water stream in a forest” instead of “stream”). 
+It’s best to use general terms like ‘cat’ or ‘dog’ instead of specific names or abstract objects that the model may not be familiar with. +Inference: +The quality of the predicted audio sample can be controlled by the num_inference_steps argument: higher steps give higher quality audio at the expense of slower inference. +The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. + +How to load and use different schedulers + +The AudioLDM pipeline uses DDIMScheduler scheduler by default. But diffusers provides many other schedulers +that can be used with the AudioLDM pipeline such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler etc. We recommend using the DPMSolverMultistepScheduler as it’s currently the fastest +scheduler there is. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() +method, or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the +DPMSolverMultistepScheduler, you can do the following: + + + Copied +>>> from diffusers import AudioLDMPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16) +>>> pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm-s-full-v2", subfolder="scheduler") +>>> pipeline = AudioLDMPipeline.from_pretrained( +... "cvssp/audioldm-s-full-v2", scheduler=dpm_scheduler, torch_dtype=torch.float16 +... ) + +AudioLDMPipeline + + +class diffusers.AudioLDMPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: ClapTextModelWithProjection +tokenizer: typing.Union[transformers.models.roberta.tokenization_roberta.RobertaTokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast] +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +vocoder: SpeechT5HifiGan + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode audios to and from latent representations. + + +text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder. AudioLDM uses the text portion of +CLAP, +specifically the RoBERTa HSTAT-unfused variant. + + +tokenizer (PreTrainedTokenizer) — +Tokenizer of class +RobertaTokenizer. + + +unet (UNet2DConditionModel) — U-Net architecture to denoise the encoded audio latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +vocoder (SpeechT5HifiGan) — +Vocoder of class +SpeechT5HifiGan. + + + +Pipeline for text-to-audio generation using AudioLDM. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
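The generation arguments documented below can be combined with the memory helpers listed further down; a minimal sketch, where the negative prompt wording and the example prompt are only illustrative choices:

>>> import torch
>>> from diffusers import AudioLDMPipeline

>>> pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
>>> pipe.enable_sequential_cpu_offload()  # keeps only the active sub-module on the GPU

>>> audios = pipe(
...     "Techno music with a strong, upbeat tempo and high melodic riffs",
...     negative_prompt="low quality, average quality",
...     num_inference_steps=10,
...     audio_length_in_s=5.0,
...     num_waveforms_per_prompt=2,
... ).audios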
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +audio_length_in_s: typing.Optional[float] = None +num_inference_steps: int = 10 +guidance_scale: float = 2.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_waveforms_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +output_type: typing.Optional[str] = 'np' + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the audio generation. If not defined, one has to pass prompt_embeds. +instead. + + +audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. + + +num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 2.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate audios that are closely linked to the text prompt, +usually at the expense of lower sound quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for audio +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
+ + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between: +"np": Return NumPy np.ndarray objects. +"pt": Return PyTorch torch.Tensor objects. + + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is a list with the generated audios. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import AudioLDMPipeline + +>>> pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "A hammer hitting a wooden surface" +>>> audio = pipe(prompt).audios[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, +text_encoder, vae and vocoder have their state dicts saved to CPU, are then moved to a torch.device('meta'), and are loaded to GPU only when their specific submodule has its forward method called. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/5e171115379f2d94747614da5ba0c5a6.txt b/scrapped_outputs/5e171115379f2d94747614da5ba0c5a6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/5e171115379f2d94747614da5ba0c5a6.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text.
Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. 
scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/5e1d66347be7cb7c4f2bdee97a8c59fe.txt b/scrapped_outputs/5e1d66347be7cb7c4f2bdee97a8c59fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/5e1d66347be7cb7c4f2bdee97a8c59fe.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. 
Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/5e36bedce9bafed2623cbd502c660674.txt b/scrapped_outputs/5e36bedce9bafed2623cbd502c660674.txt new file mode 100644 index 0000000000000000000000000000000000000000..88e5b06e9b901c30a382d8b55ca8b3362f4d77a2 --- /dev/null +++ b/scrapped_outputs/5e36bedce9bafed2623cbd502c660674.txt @@ -0,0 +1,253 @@ +DreamBooth fine-tuning example + +DreamBooth is a method to personalize text-to-image models like stable diffusion given just a few (3~5) images of a subject. + +Dreambooth examples from the project’s blog. +The Dreambooth training script shows how to implement this training procedure on a pre-trained Stable Diffusion model. +Dreambooth fine-tuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our in-depth analysis with recommended settings for different subjects, and go from there. + +Training locally + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies. We also recommend to install diffusers from the main github branch. 
+ + + Copied +pip install git+https://github.com/huggingface/diffusers +pip install -U -r diffusers/examples/dreambooth/requirements.txt +xFormers is not part of the training requirements, but we recommend you install it if you can. It could make your training faster and less memory intensive. +After all dependencies have been set up you can configure a 🤗 Accelerate environment with: + + + Copied +accelerate config +In this example we’ll use model version v1-4, so please visit its card and carefully read the license before proceeding. +The command below will download and cache the model weights from the Hub because we use the model’s Hub id CompVis/stable-diffusion-v1-4. You may also clone the repo locally and use the local path in your system where the checkout was saved. + +Dog toy example + +In this example we’ll use these images to add a new concept to Stable Diffusion using the Dreambooth process. They will be our training data. Please, download them and place them somewhere in your system. +Then you can launch the training script using: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 + +Training with a prior-preserving loss + +Prior preservation is used to avoid overfitting and language-drift. Please, refer to the paper to learn more about it if you are interested. For prior preservation, we use other images of the same class as part of the training process. The nice thing is that we can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path we specify. +According to the paper, it’s recommended to generate num_epochs * num_samples images for prior preservation. 200-300 works well for most cases. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Saving checkpoints while training + +It’s easy to overfit while training with Dreambooth, so sometimes it’s useful to save regular checkpoints during the process. One of the intermediate checkpoints might work better than the final model! To use this feature you need to pass the following argument to the training script: + + + Copied + --checkpointing_steps=500 +This will save the full training state in subfolders of your output_dir. Subfolder names begin with the prefix checkpoint-, and then the number of steps performed so far; for example: checkpoint-1500 would be a checkpoint saved after 1500 training steps. 
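If you want to see which intermediate checkpoints are available, a small helper sketch like the following (not part of the training script, and assuming the folder layout described above) lists them and picks the most recent one:

 Copied
from pathlib import Path

output_dir = Path("path_to_saved_model")

# Checkpoint folders are named checkpoint-<number_of_steps>, e.g. checkpoint-500, checkpoint-1000, ...
checkpoints = sorted(output_dir.glob("checkpoint-*"), key=lambda p: int(p.name.split("-")[1]))
print([c.name for c in checkpoints])

if checkpoints:
    # the checkpoint with the largest number of steps is the most recent one
    print(f"Most recent checkpoint: {checkpoints[-1].name}")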
+ +Resuming training from a saved checkpoint + +If you want to resume training from any of the saved checkpoints, you can pass the argument --resume_from_checkpoint and then indicate the name of the checkpoint you want to use. You can also use the special string "latest" to resume from the last checkpoint saved (i.e., the one with the largest number of steps). For example, the following would resume training from the checkpoint saved after 1500 steps: + + + Copied + --resume_from_checkpoint="checkpoint-1500" +This would be a good opportunity to tweak some of your hyperparameters if you wish. + +Performing inference using a saved checkpoint + +Saved checkpoints are stored in a format suitable for resuming training. They not only include the model weights, but also the state of the optimizer, data loaders and learning rate. +You can use a checkpoint for inference, but first you need to convert it to an inference pipeline. This is how you could do it: + + + Copied +from accelerate import Accelerator +from diffusers import DiffusionPipeline + +# Load the pipeline with the same arguments (model, revision) that were used for training +model_id = "CompVis/stable-diffusion-v1-4" +pipeline = DiffusionPipeline.from_pretrained(model_id) + +accelerator = Accelerator() + +# Use text_encoder if `--train_text_encoder` was used for the initial training +unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder) + +# Restore state from a checkpoint path. You have to use the absolute path here. +accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100") + +# Rebuild the pipeline with the unwrapped models (assignment to .unet and .text_encoder should work too) +pipeline = DiffusionPipeline.from_pretrained( + model_id, + unet=accelerator.unwrap_model(unet), + text_encoder=accelerator.unwrap_model(text_encoder), +) + +# Perform inference, or save, or push to the hub +pipeline.save_pretrained("dreambooth-pipeline") + +Training on a 16GB GPU + +With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it’s possible to train dreambooth on a 16GB GPU. + + + Copied +pip install bitsandbytes +Then pass the --use_8bit_adam option to the training script. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=2 --gradient_checkpointing \ + --use_8bit_adam \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Fine-tune the text encoder in addition to the UNet + +The script also allows to fine-tune the text_encoder along with the unet. It has been observed experimentally that this gives much better results, especially on faces. Please, refer to our blog for more details. +To enable this option, pass the --train_text_encoder argument to the training script. +Training the text encoder requires additional memory, so training won't fit on a 16GB GPU. You'll need at least 24GB VRAM to use this option. 
+ + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_text_encoder \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --use_8bit_adam + --gradient_checkpointing \ + --learning_rate=2e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Training on a 8 GB GPU: + +Using DeepSpeed it’s even possible to offload some +tensors from VRAM to either CPU or NVME, allowing training to proceed with less GPU memory. +DeepSpeed needs to be enabled with accelerate config. During configuration, +answer yes to “Do you want to use DeepSpeed?“. Combining DeepSpeed stage 2, fp16 +mixed precision, and offloading both the model parameters and the optimizer state to CPU, it’s +possible to train on under 8 GB VRAM. The drawback is that this requires more system RAM (about 25 GB). See the DeepSpeed documentation for more configuration options. +Changing the default Adam optimizer to DeepSpeed’s special version of Adam +deepspeed.ops.adam.DeepSpeedCPUAdam gives a substantial speedup, but enabling +it requires the system’s CUDA toolchain version to be the same as the one installed with PyTorch. 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --sample_batch_size=1 \ + --gradient_accumulation_steps=1 --gradient_checkpointing \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 \ + --mixed_precision=fp16 + +Inference + +Once you have trained a model, inference can be done using the StableDiffusionPipeline, by simply indicating the path where the model was saved. Make sure that your prompts include the special identifier used during training (sks in the previous examples). + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "path_to_saved_model" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A photo of sks dog in a bucket" +image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] + +image.save("dog-bucket.png") +You may also run inference from any of the saved training checkpoints. 
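For instance, if you converted a training checkpoint into an inference pipeline and saved it as dreambooth-pipeline (as in the checkpoint conversion snippet above), a minimal sketch for generating from it could look like this:

 Copied
from diffusers import DiffusionPipeline
import torch

# Load the pipeline that was rebuilt from a training checkpoint and saved with save_pretrained
pipeline = DiffusionPipeline.from_pretrained("dreambooth-pipeline", torch_dtype=torch.float16).to("cuda")

# Remember to include the special identifier used during training (sks in these examples)
prompt = "A photo of sks dog in a bucket"
image = pipeline(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket-from-checkpoint.png")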
diff --git a/scrapped_outputs/5e5eb15957cee08075224d499a7af459.txt b/scrapped_outputs/5e5eb15957cee08075224d499a7af459.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/5e6714aad97ab7a90d754767d56e6654.txt b/scrapped_outputs/5e6714aad97ab7a90d754767d56e6654.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/5e6714aad97ab7a90d754767d56e6654.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. 
Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/5e80ea73634ae182e0a738dd979776d3.txt b/scrapped_outputs/5e80ea73634ae182e0a738dd979776d3.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/5e80ea73634ae182e0a738dd979776d3.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. 
Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/5eac6f4d03e3f6f2ea4107608e1d789c.txt b/scrapped_outputs/5eac6f4d03e3f6f2ea4107608e1d789c.txt new file mode 100644 index 0000000000000000000000000000000000000000..234a43db73df49daee22efa741494c8112a00fdb --- /dev/null +++ b/scrapped_outputs/5eac6f4d03e3f6f2ea4107608e1d789c.txt @@ -0,0 +1,238 @@ +Variance Exploding Stochastic Differential Equation (VE-SDE) scheduler + + +Overview + +Original paper can be found here. + +ScoreSdeVeScheduler + + +class diffusers.ScoreSdeVeScheduler + +< +source +> +( +num_train_timesteps: int = 2000 +snr: float = 0.15 +sigma_min: float = 0.01 +sigma_max: float = 1348.0 +sampling_eps: float = 1e-05 +correct_steps: int = 1 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +snr (float) — +coefficient weighting the step from the model_output sample (from the network) to the random noise. + + +sigma_min (float) — +initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the +distribution of the data. + + +sigma_max (float) — maximum value used for the range of continuous timesteps passed into the model. + + +sampling_eps (float) — the end value of sampling, where timesteps decrease progressively from 1 to +epsilon. — + + +correct_steps (int) — number of correction steps performed on a produced sample. + + + +The variance exploding stochastic differential equation (SDE) scheduler. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_sigmas + +< +source +> +( +num_inference_steps: int +sigma_min: float = None +sigma_max: float = None +sampling_eps: float = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. 
+ + +sigma_min (float, optional) — +initial noise scale value (overrides value given at Scheduler instantiation). + + +sigma_max (float, optional) — +final noise scale value (overrides value given at Scheduler instantiation). + + +sampling_eps (float, optional) — +final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. +The sigmas control the weight of the drift and diffusion components of sample update. + +set_timesteps + +< +source +> +( +num_inference_steps: int +sampling_eps: float = None +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +sampling_eps (float, optional) — +final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step_correct + +< +source +> +( +model_output: FloatTensor +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Correct the predicted sample based on the output model_output of the network. This is often run repeatedly +after making the prediction for the previous timestep. + +step_pred + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/5ec64341818519d25506b0da268a8a62.txt b/scrapped_outputs/5ec64341818519d25506b0da268a8a62.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ceceb11a90b85d8cd4d5312125046f299ed9ab9 --- /dev/null +++ b/scrapped_outputs/5ec64341818519d25506b0da268a8a62.txt @@ -0,0 +1,644 @@ +AltDiffusion + +AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. +The abstract of the paper is the following: +In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. 
Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_alt_diffusion.py +Text-to-Image Generation +- +- +pipeline_alt_diffusion_img2img.py +Image-to-Image Text-Guided Generation +- +- + +Tips + +AltDiffusion is conceptually exactly the same as Stable Diffusion. +Run AltDiffusion +AltDiffusion can be tested very easily with the AltDiffusionPipeline, AltDiffusionImg2ImgPipeline and the "BAAI/AltDiffusion-m9" checkpoint exactly in the same way it is shown in the Conditional Image Generation Guide and the Image-to-Image Generation Guide. +How to load and use different schedulers. +The alt diffusion pipeline uses DDIMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the alt diffusion pipeline such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler") +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler) +How to convert all use cases with multiple or single pipeline +If you want to use all possible use cases in a single DiffusionPipeline we recommend using the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... AltDiffusionPipeline, +... AltDiffusionImg2ImgPipeline, +... ) + +>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components) + +>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline + +AltDiffusionPipelineOutput + + +class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. 
+ + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Alt Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +AltDiffusionPipeline + + +class diffusers.AltDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Alt Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +Ckpt: loaders.FromCkptMixin.from_ckpt() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. 
+ + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import AltDiffusionPipeline + +>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" +>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" +>>> image = pipe(prompt).images[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +disable_vae_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +enable_vae_tiling + +< +source +> +( +) + + + +Enable tiled VAE decoding. +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. + +AltDiffusionImg2ImgPipeline + + +class diffusers.AltDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. 
Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Alt Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +Ckpt: loaders.FromCkptMixin.from_ckpt() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. 
+ + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import AltDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "BAAI/AltDiffusion-m9" +>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> # "A fantasy landscape, trending on artstation" +>>> prompt = "幻想风景, artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("幻想风景.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/5f1b7057b6d02caca5fda52045779c35.txt b/scrapped_outputs/5f1b7057b6d02caca5fda52045779c35.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/5f1b7057b6d02caca5fda52045779c35.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/5f57708b7fd135ac1176bb5d74282fc8.txt b/scrapped_outputs/5f57708b7fd135ac1176bb5d74282fc8.txt new file mode 100644 index 0000000000000000000000000000000000000000..552fc9d1655a6840094e03d21339f40c7b49403d --- /dev/null +++ b/scrapped_outputs/5f57708b7fd135ac1176bb5d74282fc8.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. 
Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. 
cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. diff --git a/scrapped_outputs/5f86570ef214936dffa0822ce40bc956.txt b/scrapped_outputs/5f86570ef214936dffa0822ce40bc956.txt new file mode 100644 index 0000000000000000000000000000000000000000..1afea408cd64c8c6388f2de4ad0e81536407a2fb --- /dev/null +++ b/scrapped_outputs/5f86570ef214936dffa0822ce40bc956.txt @@ -0,0 +1,6 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/5faea79fd565192fc59265ae37d752f3.txt b/scrapped_outputs/5faea79fd565192fc59265ae37d752f3.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/5faea79fd565192fc59265ae37d752f3.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. 
Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/5ff54769e888649f49e37a6ce4d8a503.txt b/scrapped_outputs/5ff54769e888649f49e37a6ce4d8a503.txt new file mode 100644 index 0000000000000000000000000000000000000000..01ea0e5d72a3bb11b1b3044dcb7c32709e72187c --- /dev/null +++ b/scrapped_outputs/5ff54769e888649f49e37a6ce4d8a503.txt @@ -0,0 +1,250 @@ +Quicktour + +Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. +Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: +The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. +Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. +Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. +The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. +The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers goal, design philosophy, and additional details about it’s core API, check out the notebook! +Before you begin, make sure you have all the necessary libraries installed: + + + Copied +pip install --upgrade diffusers accelerate transformers +🤗 Accelerate speeds up model loading for inference and training. +🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. 
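+A quick, optional sanity check (an addition to the original guide) is to import the three libraries and print their versions before moving on; all three packages expose a __version__ attribute: + +>>> import diffusers, transformers, accelerate + +>>> print(diffusers.__version__, transformers.__version__, accelerate.__version__)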
+ +DiffusionPipeline + +The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. +Task +Description +Pipeline +Unconditional Image Generation +generate an image from Gaussian noise +unconditional_image_generation +Text-Guided Image Generation +generate an image given a text prompt +conditional_image_generation +Text-Guided Image-to-Image Translation +adapt an image guided by a text prompt +img2img +Text-Guided Image-Inpainting +fill the masked part of an image given the image, the mask and a text prompt +inpaint +Text-Guided Depth-to-Image Translation +adapt parts of an image guided by a text prompt while preserving structure via depth estimation +depth2img +Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. +For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. +Load the model with the from_pretrained() method: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: + + + Copied +>>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.13.1", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} +We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: + + + Copied +>>> pipeline.to("cuda") +Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. + + + Copied +>>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image + +Save the image by calling save: + + + Copied +>>> image.save("image_of_squirrel_painting.png") + +Local pipeline + +You can also use the pipeline locally. The only difference is you need to download the weights first: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +Then load the saved weights into the pipeline: + + + Copied +>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5") +Now you can run the pipeline as you would in the section above. + +Swapping schedulers + +Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! 
One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: + + + Copied +>>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) +Try generating an image with the new scheduler and see if you notice a difference! +In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. + +Models + +Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. +Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: + + + Copied +>>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id) +To access the model parameters, call model.config: + + + Copied +>>> model.config +The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. +Some of the most important parameters are: +sample_size: the height and width dimension of the input sample. +in_channels: the number of input channels of the input sample. +down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. +block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. +layers_per_block: the number of ResNet blocks present in each UNet block. +To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: + + + Copied +>>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) +For inference, pass the noisy image to the model and a timestep. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: + + + Copied +>>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample +To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. 
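+One extra detail worth checking before moving on to the scheduler (a small addition, not part of the original guide): because the model predicts a noise value for every pixel of every channel, the predicted residual has the same shape as the input sample: + +>>> noisy_residual.shape +torch.Size([1, 3, 256, 256])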
+ +Schedulers + +Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. +🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. +For the quicktour, you’ll instantiate the DDPMScheduler with it’s from_config() method: + + + Copied +>>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_config(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.13.1", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "trained_betas": null, + "variance_type": "fixed_small" +} +💡 Notice how the scheduler is instantiated from a configuration. Unlike a model, a scheduler does not have trainable weights and is parameter-free! +Some of the most important parameters are: +num_train_timesteps: the length of the denoising process or in other words, the number of timesteps required to process random Gaussian noise into a data sample. +beta_schedule: the type of noise schedule to use for inference and training. +beta_start and beta_end: the start and end noise values for the noise schedule. +To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. + + + Copied +>>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +The less_noisy_sample can be passed to the next timestep where it’ll get even less noisier! Let’s bring it all together now and visualize the entire denoising process. +First, create a function that postprocesses and displays the denoised image as a PIL.Image: + + + Copied +>>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) +To speed up the denoising process, move the input and model to a GPU: + + + Copied +>>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") +Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: + + + Copied +>>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) +Sit back and watch as a cat is generated from nothing but noise! 😻 + + +Next steps + +Hopefully you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: +Train or finetune a model to generate your own images in the training tutorial. +See example official and community training or finetuning scripts for a variety of use cases. 
+Learn more about loading, accessing, changing and comparing schedulers in the Using different Schedulers guide. +Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher quality images with the Stable Diffusion guide. +Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/5ffbe6af24cad70254bf332f971b1b6a.txt b/scrapped_outputs/5ffbe6af24cad70254bf332f971b1b6a.txt new file mode 100644 index 0000000000000000000000000000000000000000..607a91aa0d9c693ca270e96f0e69ff01fb89dfd4 --- /dev/null +++ b/scrapped_outputs/5ffbe6af24cad70254bf332f971b1b6a.txt @@ -0,0 +1,142 @@ +How to build a community pipeline + +Note: this page was built from the GitHub Issue on Community Pipelines #841. +Let’s make an example! +Say you want to define a pipeline that just does a single forward pass through a U-Net and then calls a scheduler only once (note: this doesn’t make sense from a scientific point of view, but it only serves as an example of how things work under the hood). +Cool! So you open your favorite IDE and start creating your pipeline 💻. +First, what model weights and configurations do we need? +We have a U-Net and a scheduler, so our pipeline should take a U-Net and a scheduler as arguments. +Also, as stated above, you’d like to be able to load weights and the scheduler config from the Hub and share your code with others, so we’ll inherit from DiffusionPipeline: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() +Now, we must register the unet and scheduler in a config file so that you can save your pipeline with save_pretrained. +Therefore, make sure you add every component that is save-able to the register_modules function: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) +Cool, the init is done! 🔥 Now, let’s go into the forward pass, which we recommend defining as __call__. Here you have all the creative freedom there is. For our amazing “one-step” pipeline, we simply create a random image and call the unet once and the scheduler once: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + def __call__(self): + image = torch.randn( + (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), + ) + timestep = 1 + + model_output = self.unet(image, timestep).sample + scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + + return scheduler_output +Cool, that’s it! 🚀 You can now run this pipeline by passing a unet and a scheduler to the init: + + + Copied +from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() +Even better, you can load pre-existing weights into the pipeline if they match your pipeline structure exactly. This is, for example,
the case for https://huggingface.co/google/ddpm-cifar10-32, so we can do the following: + + + Copied +pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32") + +output = pipeline() +We want to share this amazing pipeline with the community, so we would open a PR to add the following code as one_step_unet.py under https://github.com/huggingface/diffusers/tree/main/examples/community. + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + def __call__(self): + image = torch.randn( + (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), + ) + timestep = 1 + + model_output = self.unet(image, timestep).sample + scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + + return scheduler_output +Our amazing pipeline got merged here: #840. +Now everybody with diffusers >= 0.4.0 installed can use our pipeline magically 🪄 as follows: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Another way to share your custom_pipeline, besides sending a PR, is to upload the code that contains it to the Hugging Face Hub, as exemplified here. +Try it out now - it works! +In general, you will want to create much more sophisticated pipelines, so we recommend looking at existing pipelines here: https://github.com/huggingface/diffusers/tree/main/examples/community. +IMPORTANT: +You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline, as this will be automatically detected. + +How do community pipelines work? + + +A community pipeline is a class that has to inherit from DiffusionPipeline and whose code has been added to the examples/community files (https://github.com/huggingface/diffusers/tree/main/examples/community). +The community can load the pipeline code via the custom_pipeline argument of DiffusionPipeline (see the docs at https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.custom_pipeline). +This means the model weights and configs of the pipeline are loaded from the pretrained_model_name_or_path argument, whereas the code that powers the community pipeline is defined in a file added to examples/community. +Now, it might very well be that only some of your pipeline component weights can be downloaded from an official repo. +The other components should then be passed directly to init, as is the case for the CLIP guidance notebook here. +The magic behind all of this is that we load the code directly from GitHub. You can check it out in more detail if you follow the functionality defined here: + + + Copied +# 2.
Load the pipeline class, if using custom module then load it from the hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) +This is why a community pipeline merged to GitHub will be directly available to all diffusers packages. diff --git a/scrapped_outputs/600d267e67e94ac2f3ab2b590f7db867.txt b/scrapped_outputs/600d267e67e94ac2f3ab2b590f7db867.txt new file mode 100644 index 0000000000000000000000000000000000000000..cc23ee00cb951350879a778fd09dd9d54e9915f8 --- /dev/null +++ b/scrapped_outputs/600d267e67e94ac2f3ab2b590f7db867.txt @@ -0,0 +1,103 @@ +Controlling generation of diffusion models + +Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. +Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. +Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. +We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. +We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. +Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. +Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. +Instruct Pix2Pix +Pix2Pix Zero +Attend and Excite +Semantic Guidance +Self-attention Guidance +Depth2Image +MultiDiffusion Panorama +DreamBooth +Textual Inversion +ControlNet + +Instruct Pix2Pix + +Paper +Instruct Pix2Pix is fine-tuned from stable diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +Instruct Pix2Pix has been explicitly trained to work well with InstructGPT-like prompts. +See here for more information on how to use it. + +Pix2Pix Zero + +Paper +Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. 
+The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. +Pix2Pix Zero can be used both to edit synthetic images as well as real images. +To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. +To edit a real image, one first generates an image caption using a model like BLIP. Then one applies ddim inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. +Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here +As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. +See here for more information on how to use it. + +Attend and Excite + +Paper +Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. +A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. +Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. +See here for more information on how to use it. + +Semantic Guidance (SEGA) + +Paper +SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. +Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. +Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. +See here for more information on how to use it. + +Self-attention Guidance (SAG) + +Paper +Self-attention Guidance improves the general quality of images. 
+SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. +See here for more information on how to use it. + +Depth2Image + +Project +Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. +It conditions on a monocular depth estimate of the original image. +See here for more information on how to use it. +An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. + +MultiDiffusion Panorama + +Paper +MultiDiffusion defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). +See here for more information on how to use it to generate panoramic images. + +Fine-tuning your own models + +In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. + +DreamBooth + +DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. +See here for more information on how to use it. + +Textual Inversion + +Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. +See here for more information on how to use it. + +ControlNet + +Paper +ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. +See here for more information on how to use it. diff --git a/scrapped_outputs/600d79220580f90cca8a64e508df4a28.txt b/scrapped_outputs/600d79220580f90cca8a64e508df4a28.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/600d79220580f90cca8a64e508df4a28.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. 
This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. 
This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. 
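+If you plan to target the Neural Engine from Swift instead, a minimal sketch (assuming the repository layout shown above) only changes the variant string passed to snapshot_download:
+
+ Copied
+from huggingface_hub import snapshot_download
+from pathlib import Path
+
+repo_id = "apple/coreml-stable-diffusion-v1-4"
+variant = "split_einsum/compiled"  # ANE-friendly attention, compiled for Swift
+
+model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
+snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
+print(f"Model downloaded at {model_path}")
+You would then pass the resulting directory to --resource-path as before.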
Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/603f27c620df9b028fa5de9ae296834b.txt b/scrapped_outputs/603f27c620df9b028fa5de9ae296834b.txt new file mode 100644 index 0000000000000000000000000000000000000000..bd0245f3577cfd587e442e6c7558f08774faa662 --- /dev/null +++ b/scrapped_outputs/603f27c620df9b028fa5de9ae296834b.txt @@ -0,0 +1,108 @@ +Quicktour + +Get up and running with 🧨 Diffusers quickly! +Whether you’re a developer or an everyday user, this quick tour will help you get started and show you how to use DiffusionPipeline for inference. +Before you begin, make sure you have all the necessary libraries installed: + + + Copied +pip install --upgrade diffusers accelerate transformers +accelerate speeds up model loading for inference and training +transformers is required to run the most popular diffusion models, such as Stable Diffusion + +DiffusionPipeline + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. You can use the DiffusionPipeline out-of-the-box for many tasks across different modalities. 
Take a look at the table below for some supported tasks: +Task | Description | Pipeline +Unconditional Image Generation | generate an image from Gaussian noise | unconditional_image_generation +Text-Guided Image Generation | generate an image given a text prompt | conditional_image_generation +Text-Guided Image-to-Image Translation | adapt an image guided by a text prompt | img2img +Text-Guided Image-Inpainting | fill the masked part of an image given the image, the mask and a text prompt | inpaint +Text-Guided Depth-to-Image Translation | adapt parts of an image guided by a text prompt while preserving structure via depth estimation | depth2img +For more detailed information on how diffusion pipelines work for the different tasks, please have a look at the Using Diffusers section. +As an example, start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for text-to-image generation with Stable Diffusion. +For Stable Diffusion, please carefully read its license before running the model. +This is due to the improved image generation capabilities of the model and the potentially harmful content that could be produced with it. +Please head over to your Stable Diffusion model of choice, e.g. runwayml/stable-diffusion-v1-5, and read the license. +You can load the model as follows: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. +You can move the pipeline object to GPU, just like you would in PyTorch. + + + Copied +>>> pipeline.to("cuda") +Now you can use the pipeline on your text prompt: + + + Copied +>>> image = pipeline("An image of a squirrel in Picasso style").images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("image_of_squirrel_painting.png") +Note: You can also use the pipeline locally by downloading the weights via: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +and then loading the saved weights into the pipeline. + + + Copied +>>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5") +Running the pipeline is then identical to the code above as it’s the same model architecture. + + + Copied +>>> pipeline.to("cuda") +>>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image.save("image_of_squirrel_painting.png") +Diffusion systems can be used with multiple different schedulers, each with their own +pros and cons. By default, Stable Diffusion runs with PNDMScheduler, but it’s very simple to +use a different scheduler. For example, if you would instead like to use the EulerDiscreteScheduler, +you could do so as follows: + + + Copied +>>> from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # change scheduler to Euler +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) +For more detailed information on how to change between schedulers, please refer to the Using Schedulers guide.
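+As a recap, here is a self-contained, hedged sketch that combines the steps above — loading the pipeline, switching to the EulerDiscreteScheduler, and generating an image (the prompt and output filename are only illustrative):
+
+ Copied
+>>> from diffusers import DiffusionPipeline, EulerDiscreteScheduler
+
+>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # swap the default scheduler for Euler before running inference
+>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
+>>> pipeline.to("cuda")
+>>> image = pipeline("An image of a squirrel in Picasso style").images[0]
+>>> image.save("image_of_squirrel_painting_euler.png")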
+Stability AI’s Stable Diffusion model is an impressive image generation model +and can do much more than just generating images from text. We have dedicated a whole documentation page, +just for Stable Diffusion here. +If you want to know how to optimize Stable Diffusion to run on less memory, higher inference speeds, on specific hardware, such as Mac, or with ONNX Runtime, please have a look at our +optimization pages: +Optimized PyTorch on GPU +Mac OS with PyTorch +ONNX +OpenVINO +If you want to fine-tune or train your diffusion model, please have a look at the training section +Finally, please be considerate when distributing generated images publicly 🤗. diff --git a/scrapped_outputs/607f7fc5b9f67d124bb04c7b59ffffbb.txt b/scrapped_outputs/607f7fc5b9f67d124bb04c7b59ffffbb.txt new file mode 100644 index 0000000000000000000000000000000000000000..f7fa2efd2409e8f8de819166d0de25e1e2c3b529 --- /dev/null +++ b/scrapped_outputs/607f7fc5b9f67d124bb04c7b59ffffbb.txt @@ -0,0 +1,372 @@ +This pipeline is for research purposes only. + +Text-to-video synthesis + + +Overview + +VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation by Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, Tieniu Tan. +The abstract of the paper is the following: +A diffusion probabilistic model (DPM), which constructs a forward diffusion process by gradually adding noise to data points and learns the reverse denoising process to generate new samples, has been shown to handle complex data distribution. Despite its recent success in image synthesis, applying DPMs to video generation is still challenging due to high-dimensional data spaces. Previous methods usually adopt a standard diffusion process, where frames in the same video clip are destroyed with independent noises, ignoring the content redundancy and temporal correlation. This work presents a decomposed diffusion process via resolving the per-frame noise into a base noise that is shared among all frames and a residual noise that varies along the time axis. The denoising pipeline employs two jointly-learned networks to match the noise decomposition accordingly. Experiments on various datasets confirm that our approach, termed as VideoFusion, surpasses both GAN-based and diffusion-based alternatives in high-quality video generation. We further show that our decomposed formulation can benefit from pre-trained image diffusion models and well-support text-conditioned video creation. +Resources: +Website +GitHub repository +🤗 Spaces + +Available Pipelines: + +Pipeline +Tasks +Demo +TextToVideoSDPipeline +Text-to-Video Generation +🤗 Spaces + +Usage example + +Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): + + + Copied +import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path +Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. 
+Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: + + + Copied +import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path +It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. +We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: + + + Copied +import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path +Here are some sample outputs: +An astronaut riding a horse. + + +Darth vader surfing in waves. + + + +Available checkpoints + +damo-vilab/text-to-video-ms-1.7b +damo-vilab/text-to-video-ms-1.7b-legacy + +TextToVideoSDPipeline + + +class diffusers.TextToVideoSDPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet3DConditionModel +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Same as Stable Diffusion 2. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet3DConditionModel) — Conditional U-Net architecture to denoise the encoded video latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-to-video generation. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_frames: int = 16 +num_inference_steps: int = 50 +guidance_scale: float = 9.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'np' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.TextToVideoSDPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the video generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. + + +num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate videos that are closely linked to the text prompt, +usually at the expense of lower video quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the video generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. 
prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "np") — +The output format of the generate video. Choose between torch.FloatTensor or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.TextToVideoSDPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.TextToVideoSDPipelineOutput or tuple + + + +~pipelines.stable_diffusion.TextToVideoSDPipelineOutput if return_dict is True, otherwise a `tuple. +When returning a tuple, the first element is a list with the generated frames. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +disable_vae_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +enable_vae_tiling + +< +source +> +( +) + + + +Enable tiled VAE decoding. 
+When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. diff --git a/scrapped_outputs/60b84a74b9be779df3096cfbaa2b84cd.txt b/scrapped_outputs/60b84a74b9be779df3096cfbaa2b84cd.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/60b84a74b9be779df3096cfbaa2b84cd.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. 
A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 
📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline. 
Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/60d6c20a4723c9a7a5a05e0350264f19.txt b/scrapped_outputs/60d6c20a4723c9a7a5a05e0350264f19.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/60d6c20a4723c9a7a5a05e0350264f19.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. 
rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/60f1f6f37b1d6938e6548cb66903c562.txt b/scrapped_outputs/60f1f6f37b1d6938e6548cb66903c562.txt new file mode 100644 index 0000000000000000000000000000000000000000..44fefb3c47f353a4d2bfd9d051ae6e0b396bc8d5 --- /dev/null +++ b/scrapped_outputs/60f1f6f37b1d6938e6548cb66903c562.txt @@ -0,0 +1,4 @@ +Using Diffusers for audio + +DanceDiffusionPipeline and AudioDiffusionPipeline can be used to generate +audio rapidly! More coming soon! 
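To make the audio pipelines mentioned above concrete, here is a minimal sketch of unconditional audio generation with DanceDiffusionPipeline; the checkpoint id (harmonai/maestro-150k) and the clip length are illustrative assumptions rather than recommendations from this page:
import torch
from scipy.io.wavfile import write
from diffusers import DanceDiffusionPipeline

# load an unconditional audio diffusion checkpoint (model id is an assumption)
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")
pipe = pipe.to("cuda")

# generate roughly four seconds of audio; .audios holds one (channels, samples) NumPy array per clip
generator = torch.manual_seed(0)
audio = pipe(audio_length_in_s=4.0, generator=generator).audios[0]

# scipy expects (samples, channels), so transpose before writing
write("sample.wav", pipe.unet.config.sample_rate, audio.transpose())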
diff --git a/scrapped_outputs/611634616a07c2cabbe2f1a42399ba3a.txt b/scrapped_outputs/611634616a07c2cabbe2f1a42399ba3a.txt new file mode 100644 index 0000000000000000000000000000000000000000..a393913848d6f7c336242559c3c841e1e1ac8bf4 --- /dev/null +++ b/scrapped_outputs/611634616a07c2cabbe2f1a42399ba3a.txt @@ -0,0 +1,30 @@ +Conditional Image Generation + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference +Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for text-to-image generation with Latent Diffusion: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. +You can move the generator object to GPU, just like you would in PyTorch. + + + Copied +>>> generator.to("cuda") +Now you can use the generator on your text prompt: + + + Copied +>>> image = generator("An image of a squirrel in Picasso style").images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("image_of_squirrel_painting.png") diff --git a/scrapped_outputs/61256e96344476da8a63f3469d0e7fe8.txt b/scrapped_outputs/61256e96344476da8a63f3469d0e7fe8.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/61256e96344476da8a63f3469d0e7fe8.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. 
Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/613a9a5bae8a486957c209aaf469fc09.txt b/scrapped_outputs/613a9a5bae8a486957c209aaf469fc09.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/613a9a5bae8a486957c209aaf469fc09.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. 
You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a graffiti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the make_image_grid helper function you imported at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders, so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at its structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/615525c274f405c45ee0c9aed9ca6c72.txt b/scrapped_outputs/615525c274f405c45ee0c9aed9ca6c72.txt new file mode 100644 index 0000000000000000000000000000000000000000..e22d0ecb1ca94b6e138f26b5e0253a7e522852f5 --- /dev/null +++ b/scrapped_outputs/615525c274f405c45ee0c9aed9ca6c72.txt @@ -0,0 +1,243 @@ +Schedulers + +Diffusers contains multiple pre-built schedule functions for the diffusion process. + +What is a scheduler? + +The schedule functions, denoted Schedulers in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep to return a denoised sample. That’s why schedulers may also be called Samplers in other diffusion model implementations. +Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs: for training, adding noise in different manners represents the algorithmic process used to corrupt images, while +for inference, the scheduler defines how to update a sample based on an output from a pretrained model. +Schedulers are often defined by a noise schedule and an update rule for solving the underlying differential equation. + +Discrete versus continuous schedulers + +All schedulers take in a timestep to predict the updated version of the sample being diffused. +The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps. +Different algorithms use timesteps that can be discrete (accepting int inputs), such as the DDPMScheduler or PNDMScheduler, or continuous (accepting float inputs), such as the score-based schedulers ScoreSdeVeScheduler or ScoreSdeVpScheduler. + +Designing Re-usable schedulers + +The core design principle behind the schedule functions is to be model, system, and framework independent. +This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. +To this end, the design of schedulers is such that: +Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality. +Schedulers are currently implemented in PyTorch by default, but are designed to be framework independent (partial Jax support currently exists).
+Many diffusion pipelines, such as StableDiffusionPipeline and DiTPipeline can use any of KarrasDiffusionSchedulers + +Schedulers Summary + +The following table summarizes all officially supported schedulers, their corresponding paper +Scheduler +Paper +ddim +Denoising Diffusion Implicit Models +ddpm +Denoising Diffusion Probabilistic Models +singlestep_dpm_solver +Singlestep DPM-Solver +multistep_dpm_solver +Multistep DPM-Solver +heun +Heun scheduler inspired by Karras et. al paper +dpm_discrete +DPM Discrete Scheduler inspired by Karras et. al paper +dpm_discrete_ancestral +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper +stochastic_karras_ve +Variance exploding, stochastic sampling from Karras et. al +lms_discrete +Linear multistep scheduler for discrete beta schedules +pndm +Pseudo numerical methods for diffusion models (PNDM) +score_sde_ve +variance exploding stochastic differential equation (VE-SDE) scheduler +ipndm +improved pseudo numerical methods for diffusion models (iPNDM) +score_sde_vp +Variance preserving stochastic differential equation (VP-SDE) scheduler +euler +Euler scheduler +euler_ancestral +Euler Ancestral scheduler +vq_diffusion +VQDiffusionScheduler +repaint +RePaint scheduler + +API + +The core API for any new scheduler must follow a limited structure. +Schedulers should provide one or more def step(...) functions that should be called to update the generated sample iteratively. +Schedulers should provide a set_timesteps(...) method that configures the parameters of a schedule function for a specific inference task. +Schedulers should be framework-specific. +The base class SchedulerMixin implements low level utilities used by multiple schedulers. + +SchedulerMixin + + +class diffusers.SchedulerMixin + +< +source +> +( +) + + + +Mixin containing common functions for the schedulers. +Class attributes: +_compatibles (List[str]) — A list of classes that are compatible with the parent class, so that +from_config can be used from a class different than the one used to save the config (should be overridden +by parent class). + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Dict[str, typing.Any] = None +subfolder: typing.Optional[str] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing the schedluer configurations saved using +save_pretrained(), e.g., ./my_model_directory/. + + + +subfolder (str, optional) — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. 
Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + + +Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a scheduler configuration object to the directory save_directory, so that it can be re-loaded using the +from_pretrained() class method. + +SchedulerOutput + + +The class `SchedulerOutput` contains the outputs from any schedulers `step(...)` call. + +class diffusers.schedulers.scheduling_utils.SchedulerOutput + +< +source +> +( +prev_sample: FloatTensor + +) + + +Parameters + +prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. + + + +Base class for the scheduler’s step function output. + +KarrasDiffusionSchedulers + + +class diffusers.schedulers.KarrasDiffusionSchedulers + +< +source +> +( +value +names = None +module = None +qualname = None +type = None +start = 1 + +) + + + +An enumeration. diff --git a/scrapped_outputs/6177c975303f2e9760fb1f9de40f5312.txt b/scrapped_outputs/6177c975303f2e9760fb1f9de40f5312.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6187d16a3866c102e54dc1d2e65d97c4.txt b/scrapped_outputs/6187d16a3866c102e54dc1d2e65d97c4.txt new file mode 100644 index 0000000000000000000000000000000000000000..074dfb36700a1ed683f1c6891afc97d56e1cb780 --- /dev/null +++ b/scrapped_outputs/6187d16a3866c102e54dc1d2e65d97c4.txt @@ -0,0 +1,113 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
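As a rough sketch of what “reusing pipeline components” means in practice (the checkpoint and the second pipeline class below are illustrative assumptions, not requirements of the latent upscaler):
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# load the text-to-image pipeline once ...
text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# ... and build an image-to-image pipeline from the same components,
# so the shared weights are kept in memory only once
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)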
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. 
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/618b9a469809cf5ef39605fd51d2f6f9.txt b/scrapped_outputs/618b9a469809cf5ef39605fd51d2f6f9.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/618b9a469809cf5ef39605fd51d2f6f9.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. 
This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a graffiti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the make_image_grid helper function you imported at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders, so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at its structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/61960ec46795da55eff0c5544804f8ea.txt b/scrapped_outputs/61960ec46795da55eff0c5544804f8ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/61960ec46795da55eff0c5544804f8ea.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. 
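Because the docs describe the safety concept as editable, you can also assign a new concept string before generating. A brief sketch; the replacement concept below is an illustrative assumption, not a recommended setting: Copied >>> pipeline.safety_concept = "violence, blood, weapons"  # override the default concept
>>> pipeline.safety_concept
'violence, blood, weapons'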
There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only be applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/61a8a3e467fe717ced433d7166b804da.txt b/scrapped_outputs/61a8a3e467fe717ced433d7166b804da.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/61a8a3e467fe717ced433d7166b804da.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. 
Disable the FreeU mechanism by calling disable_freeu() on a pipeline. And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/61afb10ef23b944277452278d44a58cd.txt b/scrapped_outputs/61afb10ef23b944277452278d44a58cd.txt new file mode 100644 index 0000000000000000000000000000000000000000..643707bcdd440e65416f02ac6003e845768e0c87 --- /dev/null +++ b/scrapped_outputs/61afb10ef23b944277452278d44a58cd.txt @@ -0,0 +1,96 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. 
diff --git a/scrapped_outputs/61b6ed30c400a699b5d7edb77794d52c.txt b/scrapped_outputs/61b6ed30c400a699b5d7edb77794d52c.txt new file mode 100644 index 0000000000000000000000000000000000000000..8806ad9c7bc5846e28773660f7a1afd17769a279 --- /dev/null +++ b/scrapped_outputs/61b6ed30c400a699b5d7edb77794d52c.txt @@ -0,0 +1,53 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "CrossAttnDownBlock3D", "DownBlock3D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D", "CrossAttnUpBlock3D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1024) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 64) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. 
If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_channels, num_frames, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. 
Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/61ba9252808ef9280da66eea0f3e803f.txt b/scrapped_outputs/61ba9252808ef9280da66eea0f3e803f.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c0ad31ee6de5413052deffb62095ba2092bc251 --- /dev/null +++ b/scrapped_outputs/61ba9252808ef9280da66eea0f3e803f.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. 
Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. 
First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", + +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up som GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass it off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory-usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
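A side note on the low-VRAM walkthrough above: if you only want a rough peak-memory number instead of running the full memory-report script linked there, a minimal sketch could look like the following (it assumes a CUDA device; report_peak_memory is a hypothetical helper, not part of diffusers).

import torch

# Call torch.cuda.reset_peak_memory_stats() before the walkthrough, then this helper
# after decoding the final image to get the peak GPU memory allocated by PyTorch.
def report_peak_memory():
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"Peak GPU memory allocated: {peak_gb:.2f} GB")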
__call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True max_sequence_length: int = 120 **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. max_sequence_length (int defaults to 120) — Maximum sequence length to use with the prompt. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False max_sequence_length: int = 120 **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
For PixArt-Alpha, it’s should be the embeddings of the "" +string. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. max_sequence_length (int, defaults to 120) — Maximum sequence length to use for the prompt. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/61fcccf5d4c355affd20a0fd6ad10fd0.txt b/scrapped_outputs/61fcccf5d4c355affd20a0fd6ad10fd0.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ef40f070274021b77fb2e361dbd5e9e695ba0c --- /dev/null +++ b/scrapped_outputs/61fcccf5d4c355affd20a0fd6ad10fd0.txt @@ -0,0 +1,116 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. 
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into a AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. 
image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/61ff8bf729a1fb78d82e1fc2b88ebbb7.txt b/scrapped_outputs/61ff8bf729a1fb78d82e1fc2b88ebbb7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e2e3f78fe8241413a85b55d180c1b4b614352447 --- /dev/null +++ b/scrapped_outputs/61ff8bf729a1fb78d82e1fc2b88ebbb7.txt @@ -0,0 +1,123 @@ +Unconditional Image-Generation + +In this section, we explain how one can train an unconditional image generation diffusion +model. “Unconditional” because the model is not conditioned on any context to generate an image - once trained the model will simply generate images that resemble its training data +distribution.
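To get a feel for what such a model produces once trained, here is a minimal sampling sketch; it assumes the example flowers checkpoint referenced further below (anton-l/ddpm-ema-flowers-64) and uses the DDPMPipeline class from diffusers.

import torch
from diffusers import DDPMPipeline

# Load an unconditionally trained DDPM checkpoint and sample from it; no prompt or other
# conditioning is passed, the model simply draws an image from its learned distribution.
pipeline = DDPMPipeline.from_pretrained("anton-l/ddpm-ema-flowers-64")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")
image = pipeline().images[0]
image.save("flowers_sample.png")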
+ +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: + + + Copied +pip install diffusers[training] accelerate datasets +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config + +Unconditional Flowers + +The command to train a DDPM UNet model on the Oxford Flowers dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --resolution=64 \ + --output_dir="ddpm-ema-flowers-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub +An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64 +A full training run takes 2 hours on 4xV100 GPUs. + + +Unconditional Pokemon + +The command to train a DDPM UNet model on the Pokemon dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/pokemon" \ + --resolution=64 \ + --output_dir="ddpm-ema-pokemon-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub +An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64 +A full training run takes 2 hours on 4xV100 GPUs. + + +Using your own data + +To use your own dataset, there are 2 ways: +you can either provide your own folder as --train_data_dir +or you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the --dataset_name argument. +Note: If you want to create your own training dataset please have a look at this document. +Below, we explain both in more detail. + +Provide the dataset as a folder + +If you provide your own folders with images, the script expects the following directory structure: + + + Copied +data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png +In other words, the script will take care of gathering all images inside the folder. You can then run the script like this: + + + Copied +accelerate launch train_unconditional.py \ + --train_data_dir \ + +Internally, the script will use the ImageFolder feature which will automatically turn the folders into 🤗 Dataset objects. + +Upload your data to the hub, as a (possibly private) repo + +It’s very easy (and convenient) to upload your image dataset to the hub using the ImageFolder feature available in 🤗 Datasets. Simply do the following: + + + Copied +from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) +ImageFolder will create an image column containing the PIL-encoded images. +Next, push it to the hub! 
+ + + Copied +# assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) +and that’s it! You can now train your model by simply setting the --dataset_name argument to the name of your dataset on the hub. +More on this can also be found in this blog post. diff --git a/scrapped_outputs/623292f21abfe33b7d74b806858e3c0c.txt b/scrapped_outputs/623292f21abfe33b7d74b806858e3c0c.txt new file mode 100644 index 0000000000000000000000000000000000000000..48649ec5c0477bba9de1fe1afcb189a2b6b4fbd9 --- /dev/null +++ b/scrapped_outputs/623292f21abfe33b7d74b806858e3c0c.txt @@ -0,0 +1,88 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. 
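A minimal usage sketch for maybe_convert_prompt, assuming a textual inversion embedding has already been loaded with load_textual_inversion as in the examples above and that "<my-style>" is a placeholder for its activation token:

# The tokenizer already contains the extra tokens added by load_textual_inversion, so
# maybe_convert_prompt expands a multi-vector token into its per-vector special tokens
# (or returns the prompt unchanged if no expansion is needed).
prompt = "A backpack in the style of <my-style>"
expanded_prompt = pipe.maybe_convert_prompt(prompt, pipe.tokenizer)
print(expanded_prompt)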
unload_textual_inversion < source > ( tokens: Union = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") diff --git a/scrapped_outputs/62348b63bf35f1ba7e0334d798df64ed.txt b/scrapped_outputs/62348b63bf35f1ba7e0334d798df64ed.txt new file mode 100644 index 0000000000000000000000000000000000000000..025d8d9b7e21e34a1a210fa0bd70fff4f7c14e19 --- /dev/null +++ b/scrapped_outputs/62348b63bf35f1ba7e0334d798df64ed.txt @@ -0,0 +1,63 @@ +BaseOutputs + +All models have outputs that are instances of subclasses of BaseOutput. Those are +data structures containing all the information returned by the model, but that can also be used as tuples or +dictionaries. +Let’s see how this looks in an example: + + + Copied +from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() +The outputs object is a ImagePipelineOutput, as we can see in the +documentation of that class below, it means it has an image attribute. +You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None: + + + Copied +outputs.images +or via keyword lookup + + + Copied +outputs["images"] +When considering our outputs object as tuple, it only considers the attributes that don’t have None values. +Here for instance, we could retrieve images via indexing: + + + Copied +outputs[:1] +which will return the tuple (outputs.images) for instance. + +BaseOutput + + +class diffusers.utils.BaseOutput + +< +source +> +( +) + + + +Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +python dictionary. +You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +before. + +to_tuple + +< +source +> +( +) + + + +Convert self to a tuple containing all the attributes/keys that are not None. diff --git a/scrapped_outputs/624e766629b285858b6d2f5cd1043b20.txt b/scrapped_outputs/624e766629b285858b6d2f5cd1043b20.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/624e766629b285858b6d2f5cd1043b20.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. 
JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing or xFormers. You should have a GPU with more than 30GB of memory if you want to train faster with Flax. This guide explores the train_dreambooth.py script to help you become more familiar with it and show you how to adapt it for your own use case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you're using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn't support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you'd like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for some reason, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types.
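For intuition, here is a minimal sketch of the weighting idea following the Min-SNR paper (the helper below is illustrative, not the exact code used in the script): each timestep's MSE loss is rescaled by min(SNR(t), γ) / SNR(t) when predicting epsilon, and by min(SNR(t), γ) / (SNR(t) + 1) when predicting v.

import torch

# Illustrative helper: given the per-timestep signal-to-noise ratio, return a Min-SNR-gamma
# loss weight. "snr" would be derived from the noise scheduler's alphas_cumprod; gamma
# corresponds to --snr_gamma.
def min_snr_weight(snr: torch.Tensor, gamma: float = 5.0, prediction_type: str = "epsilon") -> torch.Tensor:
    clipped = torch.clamp(snr, max=gamma)  # min(SNR(t), gamma)
    if prediction_type == "v_prediction":
        return clipped / (snr + 1.0)
    return clipped / snr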
This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. 
The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 
16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. 
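A minimal sketch of what that looks like (the directory below is a placeholder for whatever you passed as --output_dir to train_dreambooth_lora.py, not a real checkpoint):

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# "path/to/dreambooth-lora" is a placeholder for the LoRA training output directory.
pipeline.load_lora_weights("path/to/dreambooth-lora")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket-lora.png")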
diff --git a/scrapped_outputs/62518973b831c3f907082eecc8c2e860.txt b/scrapped_outputs/62518973b831c3f907082eecc8c2e860.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/625e0726ff8d316ec177abdc5ca64e4f.txt b/scrapped_outputs/625e0726ff8d316ec177abdc5ca64e4f.txt new file mode 100644 index 0000000000000000000000000000000000000000..07c26006c9cf463e7cf6147858153f2079e6c0ef --- /dev/null +++ b/scrapped_outputs/625e0726ff8d316ec177abdc5ca64e4f.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. 
Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = 
"https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/62abf573f0bf7598b20b3412d2733012.txt b/scrapped_outputs/62abf573f0bf7598b20b3412d2733012.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/62abf573f0bf7598b20b3412d2733012.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. 
The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. 
Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/62b63aeb5243a856a7e83c8151f0fe19.txt b/scrapped_outputs/62b63aeb5243a856a7e83c8151f0fe19.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ef40f070274021b77fb2e361dbd5e9e695ba0c --- /dev/null +++ b/scrapped_outputs/62b63aeb5243a856a7e83c8151f0fe19.txt @@ -0,0 +1,116 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. 
Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into a AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. 
This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +model = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/62d80a23fab160eb7db6a58a45064744.txt b/scrapped_outputs/62d80a23fab160eb7db6a58a45064744.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/62d80a23fab160eb7db6a58a45064744.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. 
If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. 
See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
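One practical note on the __call__ arguments documented above: strength is the main lever for how far the result departs from the input image. The following is a small, hedged sketch (reusing the checkpoint, COCO image, and prompt from the example above; exact outputs will vary) that sweeps strength to make the trade-off visible:

import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

# Lower strength keeps more of the input image; higher strength gives the prompt more freedom.
for strength in (0.3, 0.6, 0.9):
    image = pipe(prompt="two tigers", image=init_image, strength=strength).images[0]
    image.save(f"two_tigers_strength_{strength}.png")

At strength=0.3 the layout and texture of the source photo dominate; at strength=0.9 the depth conditioning is still respected but most of the image content is re-synthesized.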
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/631d3feb7cfb6f95983d02c68fc90c28.txt b/scrapped_outputs/631d3feb7cfb6f95983d02c68fc90c28.txt new file mode 100644 index 0000000000000000000000000000000000000000..c93f948dc410dd64585375368bc3e52d8d0c43f6 --- /dev/null +++ b/scrapped_outputs/631d3feb7cfb6f95983d02c68fc90c28.txt @@ -0,0 +1,138 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... 
) + +>>> # Load pipeline from a local file +>>> # the file was previously downloaded to ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt") + +>>> # Enable float16 and move to GPU +>>> import torch +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained VAE weights saved in the .ckpt or .safetensors format into an AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model.
When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate an AutoencoderKL from pretrained VAE weights saved in the original .ckpt or +.safetensors format. The model is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlnetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on.
Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The model is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/6366dbbe1fea40d408d4a13a35289cd5.txt b/scrapped_outputs/6366dbbe1fea40d408d4a13a35289cd5.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/6366dbbe1fea40d408d4a13a35289cd5.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i.
scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
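To see how set_timesteps(), add_noise_to_input(), step(), and step_correct() fit together, here is a rough sketch of a manual sampling loop. It loosely follows the deprecated KarrasVePipeline; the scheduler.schedule attribute (per-step sigmas), the checkpoint name, and the exact preconditioning of the model call are assumptions for illustration, not a definitive recipe:

import torch
from diffusers import KarrasVeScheduler, UNet2DModel

model = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", subfolder="unet")  # assumed VE-style checkpoint
scheduler = KarrasVeScheduler()
scheduler.set_timesteps(num_inference_steps=50)

generator = torch.manual_seed(0)
sample = torch.randn((1, 3, 256, 256), generator=generator) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    sigma = scheduler.schedule[t]  # assumption: sigma for the current step
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # Langevin-like "churn" step: add noise to reach a higher level sigma_hat >= sigma
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma, generator=generator)

    # First-order (Euler) step; the (x + 1) / 2 and sigma / 2 preconditioning mirrors the
    # deprecated KarrasVePipeline and is model-specific.
    model_output = (sigma_hat / 2) * model((sample_hat + 1) / 2, sigma_hat / 2).sample
    output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

    # Second-order correction, skipped on the final step (sigma_prev == 0).
    if sigma_prev != 0:
        model_output = (sigma_prev / 2) * model((output.prev_sample + 1) / 2, sigma_prev / 2).sample
        output = scheduler.step_correct(
            model_output, sigma_hat, sigma_prev, sample_hat, output.prev_sample, output.derivative
        )
    sample = output.prev_sample

# Map the final sample from [-1, 1] back to image range.
image = (sample / 2 + 0.5).clamp(0, 1)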
diff --git a/scrapped_outputs/63872197f6a6458439db2fea0a6a509a.txt b/scrapped_outputs/63872197f6a6458439db2fea0a6a509a.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/63872197f6a6458439db2fea0a6a509a.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. 
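If some loss of fidelity is acceptable for your workload, the ratio can be pushed higher than the 0.5 used above. A brief sketch (the 0.75 value is an arbitrary assumption for illustration, and tomesd.remove_patch() is shown in case you want to undo the patch afterwards):

import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# More aggressive token merging: faster inference, at some cost in image quality.
tomesd.apply_patch(pipeline, ratio=0.75)
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

# Revert to the unpatched pipeline when full quality is needed.
tomesd.remove_patch(pipeline)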
You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/63b63c3403947ffd57d62f2f3ad9a146.txt b/scrapped_outputs/63b63c3403947ffd57d62f2f3ad9a146.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/63b63c3403947ffd57d62f2f3ad9a146.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. 
subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/63f6cb61689dadaf6c4af24bc93d0ccd.txt b/scrapped_outputs/63f6cb61689dadaf6c4af24bc93d0ccd.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/63f6cb61689dadaf6c4af24bc93d0ccd.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. 
The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. 
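Putting these tips together, the cat-to-dog edit described above looks roughly like the following condensed sketch (the image URL is a placeholder, and the checkpoint and scheduler setup follow the full examples in the class reference below):

import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

init_image = load_image("https://example.com/cat.png").resize((768, 768))  # placeholder URL

source_prompt = "a photo of a cat"
target_prompt = "a photo of a dog"

# 1. Contrast the source and target prompts to obtain a mask of the regions to edit.
mask_image = pipe.generate_mask(image=init_image, source_prompt=source_prompt, target_prompt=target_prompt)

# 2. Partially invert the image, guided by a caption describing the source image.
image_latents = pipe.invert(image=init_image, prompt=source_prompt).latents

# 3. Denoise towards the target concept, using the source concept as the negative prompt.
image = pipe(
    prompt=target_prompt,
    negative_prompt=source_prompt,
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]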
StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent to inpaint the masked area. Must be between 0 and 1.
When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=mask_prompt, target_prompt=prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings.
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/63fd2beba4c6373ab79f107a9b62d604.txt b/scrapped_outputs/63fd2beba4c6373ab79f107a9b62d604.txt new file mode 100644 index 0000000000000000000000000000000000000000..0c8b48f867375e08b716930801b9b71701856adf --- /dev/null +++ b/scrapped_outputs/63fd2beba4c6373ab79f107a9b62d604.txt @@ -0,0 +1,3 @@ +OpenVINO + +Under construction 🚧 diff --git a/scrapped_outputs/642085dbe7935c85fc5b930ccd8bd2e1.txt b/scrapped_outputs/642085dbe7935c85fc5b930ccd8bd2e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..8e035fc84c71e612cbb63df1b6546d79487088fd --- /dev/null +++ b/scrapped_outputs/642085dbe7935c85fc5b930ccd8bd2e1.txt @@ -0,0 +1,173 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. 
LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet + +For this example, we'll use the [LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) model with canny ControlNet, but the same steps can be applied to other LCM models as well. 
+ + Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/6426e872c267d90b3f627d6b0cdceb8e.txt b/scrapped_outputs/6426e872c267d90b3f627d6b0cdceb8e.txt new file mode 100644 index 
0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/6426e872c267d90b3f627d6b0cdceb8e.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
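A minimal usage sketch may help orient the argument reference that follows; the checkpoint id "kakaobrain/karlo-v1-alpha", the prompt, and the output filename are illustrative assumptions rather than part of this reference:

import torch
from diffusers import UnCLIPPipeline

# Assumed checkpoint id for the karlo unCLIP weights; substitute your own checkpoint if needed.
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The prior, decoder, and super-resolution stages each take their own step count.
image = pipe(
    "a photograph of a red frog on a green leaf",
    prior_num_inference_steps=25,
    decoder_num_inference_steps=25,
    super_res_num_inference_steps=7,
).images[0]
image.save("unclip_sample.png")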
__call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. 
+ The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. 
decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/646ec8c495a3085513e37a1729bc5930.txt b/scrapped_outputs/646ec8c495a3085513e37a1729bc5930.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/646ec8c495a3085513e37a1729bc5930.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... 
) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/647ed04244c56156d72a64d095ecd3fa.txt b/scrapped_outputs/647ed04244c56156d72a64d095ecd3fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..825b60520b59c30a8c2a5c018ff51010ada6643b --- /dev/null +++ b/scrapped_outputs/647ed04244c56156d72a64d095ecd3fa.txt @@ -0,0 +1,376 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. 
model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
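As a quick illustration of load_lora_weights() on a Stable Diffusion pipeline, here is a minimal sketch; the repository path, weight file name, and adapter name below are illustrative placeholders, not values taken from this page:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights into the UNet and text encoder.
# `weight_name` and `adapter_name` are optional and shown only for illustration.
pipe.load_lora_weights(
    "path/to/my-lora-repo",  # hypothetical Hub repo id or local directory
    weight_name="pytorch_lora_weights.safetensors",
    adapter_name="my_lora",
)

image = pipe("a photo of a cat").images[0]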
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
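As a rough sketch of how encode_prompt() can be used to pre-compute embeddings and feed them back into the pipeline (this assumes the method returns a (prompt_embeds, negative_prompt_embeds) tuple, as in recent Diffusers releases):

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pre-compute the text embeddings once...
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a photo of an astronaut riding a horse on mars",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# ...and reuse them instead of passing raw prompt strings.
image = pipe(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds
).images[0]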
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. 
image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... 
strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/64d1d3ff0ac48642ee6679049e5d1d67.txt b/scrapped_outputs/64d1d3ff0ac48642ee6679049e5d1d67.txt new file mode 100644 index 0000000000000000000000000000000000000000..f09d76abe300613e7a2c46ac399075714d8042b5 --- /dev/null +++ b/scrapped_outputs/64d1d3ff0ac48642ee6679049e5d1d67.txt @@ -0,0 +1,251 @@ +InstructPix2Pix: Learning to Follow Image Editing Instructions + + +Overview + +InstructPix2Pix: Learning to Follow Image Editing Instructions by Tim Brooks, Aleksander Holynski and Alexei A. Efros. +The abstract of the paper is the following: +We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. +Resources: +Project Page. +Paper. +Original Code. +Demo. 
+ +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionInstructPix2PixPipeline +Text-Based Image Editing +🤗 Space + +Usage example + + + + Copied +import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + + +def download_image(url): + image = PIL.Image.open(requests.get(url, stream=True).raw) + image = PIL.ImageOps.exif_transpose(image) + image = image.convert("RGB") + return image + + +image = download_image(url) + +prompt = "make the mountains snowy" +edit = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images[0] +edit.save("snowy_mountains.png") + +StableDiffusionInstructPix2PixPipeline + + +class diffusers.StableDiffusionInstructPix2PixPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +num_inference_steps: int = 100 +guidance_scale: float = 7.5 +image_guidance_scale: float = 1.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. +image (PIL.Image.Image): +Image, or tensor representing an image batch which will be repainted according to prompt. +num_inference_steps (int, optional, defaults to 100): +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. +guidance_scale (float, optional, defaults to 7.5): +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages generating images that are closely linked to the text prompt, +usually at the expense of lower image quality. This pipeline requires a value of at least 1. +image_guidance_scale (float, optional, defaults to 1.5): +Image guidance scale pushes the generated image towards the initial image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. A higher image guidance scale encourages +generating images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). +num_images_per_prompt (int, optional, defaults to 1): +The number of images to generate per prompt. +eta (float, optional, defaults to 0.0): +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. +generator (torch.Generator, optional): +One or a list of torch generator(s) +to make generation deterministic. +latents (torch.FloatTensor, optional): +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation.
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +output_type (str, optional, defaults to "pil"): +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. +return_dict (bool, optional, defaults to True): +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. +callback (Callable, optional): +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). +callback_steps (int, optional, defaults to 1): +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. diff --git a/scrapped_outputs/64e8e670ec1b97b0448bb942c28b455e.txt b/scrapped_outputs/64e8e670ec1b97b0448bb942c28b455e.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/64e8e670ec1b97b0448bb942c28b455e.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models.
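As a rough, hedged sketch of how DDIMInverseScheduler is typically driven (pipelines that support inversion wrap this loop internally; the random latents and helper calls below are stand-ins, not the canonical recipe):

import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, DDIMInverseScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Build the matching inverse scheduler from the same configuration.
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
inverse_scheduler.set_timesteps(50, device="cuda")

# Stand-ins: in practice `latents` come from encoding an image with the VAE.
latents = torch.randn((1, 4, 64, 64), dtype=torch.float16, device="cuda")
prompt_embeds, _ = pipe.encode_prompt("a photo of a cat", "cuda", 1, False)

# Run the diffusion process "forwards", mapping image latents back towards noise.
for t in inverse_scheduler.timesteps:
    noise_pred = pipe.unet(latents, t, encoder_hidden_states=prompt_embeds).sample
    latents = inverse_scheduler.step(noise_pred, t, latents).prev_sample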
DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/64f8e3658e23b5c11f772fa29b50189c.txt b/scrapped_outputs/64f8e3658e23b5c11f772fa29b50189c.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/64f8e3658e23b5c11f772fa29b50189c.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference. With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from a 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. 
Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. torch.compile PyTorch 2 includes torch.compile which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. 
+pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduces the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed which when placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency and it becomes more evident when the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, any CPU and GPU communication sync should be none or be kept to a bare minimum because it can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. 
First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/651b456a46fd40b03fa2512409d064d1.txt b/scrapped_outputs/651b456a46fd40b03fa2512409d064d1.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/651b456a46fd40b03fa2512409d064d1.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. 
If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the helper function you created at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at its structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/6524b57292c7255f8b254693794cf7d3.txt b/scrapped_outputs/6524b57292c7255f8b254693794cf7d3.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/6524b57292c7255f8b254693794cf7d3.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/6572771c759713fc1a83613cdcf1efdb.txt b/scrapped_outputs/6572771c759713fc1a83613cdcf1efdb.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/6572771c759713fc1a83613cdcf1efdb.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. 
If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
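The reference above only documents the scheduler API itself. As a minimal usage sketch, the scheduler can be swapped into an existing pipeline by reusing that pipeline's scheduler config (the checkpoint id is only an illustration; DPMSolverSDEScheduler additionally requires the torchsde package):

import torch
from diffusers import StableDiffusionPipeline, DPMSolverSDEScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config and enable Karras sigmas for the step sizes
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]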
diff --git a/scrapped_outputs/6598de003e007d1de80ac0d799f90c43.txt b/scrapped_outputs/6598de003e007d1de80ac0d799f90c43.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/6598de003e007d1de80ac0d799f90c43.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
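The constructor documented above can also be called directly. A minimal sketch of assembling the pipeline from its two components, assuming the checkpoint follows the standard Diffusers repository layout with unet and scheduler subfolders:

from diffusers import ConsistencyModelPipeline, UNet2DModel, CMStochasticIterativeScheduler

model_id = "openai/diffusers-cd_imagenet64_l2"
unet = UNet2DModel.from_pretrained(model_id, subfolder="unet")
scheduler = CMStochasticIterativeScheduler.from_pretrained(model_id, subfolder="scheduler")

# Equivalent to ConsistencyModelPipeline.from_pretrained(model_id) when the
# repository uses the standard layout
pipe = ConsistencyModelPipeline(unet=unet, scheduler=scheduler)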
__call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. 
+>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/65a29412b27b6656a1a863b367dd255e.txt b/scrapped_outputs/65a29412b27b6656a1a863b367dd255e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/65b979f5dde7d9c53f46f73fc6d6eec5.txt b/scrapped_outputs/65b979f5dde7d9c53f46f73fc6d6eec5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4d313869e87be5d21416eaebaf209bc174ce6fb --- /dev/null +++ b/scrapped_outputs/65b979f5dde7d9c53f46f73fc6d6eec5.txt @@ -0,0 +1,89 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. 
The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) Load from a local file Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a pipeline.py file that contains the pipeline class in order to successfully load it. Copied pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="./path/to/pipeline_directory/", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) Load from a specific version By default, community pipelines are loaded from the latest stable version of Diffusers. To load a community pipeline from another version, use the custom_revision parameter. main older version For example, to load from the main branch: Copied pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + custom_revision="main", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. 
You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/65d759aca80f4fa1b02d880043611bda.txt b/scrapped_outputs/65d759aca80f4fa1b02d880043611bda.txt new file mode 100644 index 0000000000000000000000000000000000000000..be96d2b3cd8b0e9da6f4a7f488cab978fbcab007 --- /dev/null +++ b/scrapped_outputs/65d759aca80f4fa1b02d880043611bda.txt @@ -0,0 +1,25 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. 
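Like other schedulers, IPNDMScheduler is driven through set_timesteps(), scale_model_input(), and step(), documented below. The following is a minimal sketch of that loop; the UNet checkpoint is only an illustration of the API and not a pairing the scheduler was designed for:

import torch
from diffusers import IPNDMScheduler, UNet2DModel

unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
scheduler = IPNDMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)

for t in scheduler.timesteps:
    # scale_model_input keeps the call signature interchangeable with other schedulers
    model_input = scheduler.scale_model_input(sample, t)
    with torch.no_grad():
        noise_pred = unet(model_input, t).sample
    # step() applies the fourth-order linear multistep update
    sample = scheduler.step(noise_pred, t, sample).prev_sample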
IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/6604852112cd381c965f31a813dbdb94.txt b/scrapped_outputs/6604852112cd381c965f31a813dbdb94.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6638594d6735f3ea1a3c25e7bbd701d6.txt b/scrapped_outputs/6638594d6735f3ea1a3c25e7bbd701d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..86c0719a2317d8cc8ac7716a79b72e0231f612d9 --- /dev/null +++ b/scrapped_outputs/6638594d6735f3ea1a3c25e7bbd701d6.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. 
The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. 
encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. diff --git a/scrapped_outputs/6677924647c864e79ec496dbd247782b.txt b/scrapped_outputs/6677924647c864e79ec496dbd247782b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/668127b52e04515d766c66f2389684ef.txt b/scrapped_outputs/668127b52e04515d766c66f2389684ef.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/668127b52e04515d766c66f2389684ef.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. 
Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the helper function you created at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at their structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/66a3fffe151609b0d43c75dab6229cf5.txt b/scrapped_outputs/66a3fffe151609b0d43c75dab6229cf5.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/66a3fffe151609b0d43c75dab6229cf5.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. 
num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.000009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.000009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Here q_n is the probability distribution for the forward process of the nth latent pixel, C_0 is a class of a latent pixel embedding, and C_k is the class of the masked latent pixel. +The non-cumulative result (omitting logarithms) is a matrix whose rows correspond to the classes C_0 through C_k and whose columns correspond to the latent pixels q_0 through q_n, with entry (i, j) equal to q_j(x_t | x_{t-1} = C_i). +The cumulative result (omitting logarithms) is the analogous matrix with entry (i, j) equal to q_j_cumulative(x_t | x_0 = C_i), where the rows run over C_0 through C_{k-1}. + Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used. Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1.
+ Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. generator (torch.Generator, or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computed. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/66e531f9198e7d27102634d66b5b2cfe.txt b/scrapped_outputs/66e531f9198e7d27102634d66b5b2cfe.txt new file mode 100644 index 0000000000000000000000000000000000000000..c796491cbfe9ea7c96684c36934fc2d682903305 --- /dev/null +++ b/scrapped_outputs/66e531f9198e7d27102634d66b5b2cfe.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: (1) the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters; (2) it introduces size and crop-conditioning to prevent training data from being discarded and to gain more control over how a generated image should be cropped; (3) it introduces a two-stage model process: the base model (which can also be run as a standalone model) generates an image that is used as input to the refiner model, which adds additional high-quality details. This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default.
To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. 
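Putting the memory and speed tips above together, a minimal sketch for the base model (torch>=2.0 is assumed for torch.compile):

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)

# Offload submodules to the CPU when idle instead of moving the whole pipeline to "cuda"
pipe.enable_model_cpu_offload()

# Compile the UNet for an additional speed-up
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt=prompt).images[0]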
diff --git a/scrapped_outputs/66ece9199d5742217fd26e2b49294dd0.txt b/scrapped_outputs/66ece9199d5742217fd26e2b49294dd0.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/670ef410993f654ffb5c10852845283e.txt b/scrapped_outputs/670ef410993f654ffb5c10852845283e.txt new file mode 100644 index 0000000000000000000000000000000000000000..92296bcdbbca1fe039034b8fbfc23043d7895d17 --- /dev/null +++ b/scrapped_outputs/670ef410993f654ffb5c10852845283e.txt @@ -0,0 +1,105 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with a convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion.
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/671d4f3bacc5d8e53d126e1b7b94c31c.txt b/scrapped_outputs/671d4f3bacc5d8e53d126e1b7b94c31c.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/671d4f3bacc5d8e53d126e1b7b94c31c.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. 
rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
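As a usage sketch (not part of the API reference above), this scheduler is typically swapped into an existing pipeline with from_config(); the Stable Diffusion checkpoint below is only an example: Copied
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the pipeline's default scheduler with ancestral Euler sampling,
# reusing the existing scheduler configuration.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)

# 20-30 inference steps are usually enough with this scheduler.
image = pipeline("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]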
diff --git a/scrapped_outputs/672ada4a0f1a9c6789e4db19ff795ebd.txt b/scrapped_outputs/672ada4a0f1a9c6789e4db19ff795ebd.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ae8ff9c01a41be4bc950412702f6aa66a636b0b --- /dev/null +++ b/scrapped_outputs/672ada4a0f1a9c6789e4db19ff795ebd.txt @@ -0,0 +1,108 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are known to be slower than their counterparts, GANs, because of the iterative and sequential reverse diffusion process. Recent works try to address this limitation with: progressive timestep distillation (such as LCM LoRA) model compression (such as SSD-1B) reusing adjacent features of the denoiser (such as DeepCache) In this tutorial, we focus on leveraging the power of PyTorch 2 to accelerate the inference latency of a text-to-image diffusion pipeline, instead. We will use Stable Diffusion XL (SDXL) as a case study, but the techniques we will discuss should extend to other text-to-image diffusion pipelines. Setup Make sure you’re on the latest version of diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft To benefit from the fastest kernels, use PyTorch nightly. You can find the installation instructions here. To report the numbers shown below, we used an 80GB 400W A100 with its clock rate set to the maximum. This tutorial doesn’t present the benchmarking code and focuses on how to perform the optimizations, instead. For the full benchmarking code, refer to: https://github.com/huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable the use of a reduced precision and scaled_dot_product_attention: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without efficiency. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This takes 7.36 seconds: Running inference in bfloat16 Enable the first optimization: use a reduced precision to run the inference. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without efficiency. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds: Why bfloat16? Using a reduced numerical precision (such as float16, bfloat16) to run inference doesn’t affect the generation quality but significantly improves latency. The benefits of using the bfloat16 numerical precision as compared to float16 are hardware-dependent. Modern generations of GPUs tend to favor bfloat16. Furthermore, in our experiments, we found bfloat16 to be much more resilient when used with quantization in comparison to float16. We have a dedicated guide for running inference in a reduced precision. Running attention efficiently Attention blocks are intensive to run.
But with PyTorch’s scaled_dot_product_attention, we can run them efficiently. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] scaled_dot_product_attention improves the latency from 4.63 seconds to 3.31 seconds. Use faster kernels with torch.compile Compile the UNet and the VAE to benefit from the faster kernels. First, configure a few compiler flags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True For the full list of compiler flags, refer to this file. It is also important to change the memory layout of the UNet and the VAE to “channels_last” when compiling them. This ensures maximum speed: Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Then, compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` will be slow, subsequent ones will be faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. As we’re aiming for maximum inference speed, we opt for the inductor backend using the “max-autotune” mode. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. Specifying fullgraph to be True ensures that there are no graph breaks in the underlying model, ensuring the fullest potential of torch.compile. Using SDPA attention and compiling both the UNet and VAE reduces the latency from 3.31 seconds to 2.54 seconds. Combine the projection matrices of attention Both the UNet and the VAE used in SDXL make use of Transformer-like blocks. A Transformer block consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. In the naive implementation, these projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one shot. This increases the size of the matmuls of the input projections and improves the impact of quantization (to be discussed next). Enabling this kind of computation in Diffusers just takes a single line of code: Copied pipe.fuse_qkv_projections() It provides a minor boost from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. As such, it’s not available for many non-SD pipelines such as Kandinsky. You can refer to this PR to get an idea about how to support this kind of computation. Dynamic quantization Apply dynamic int8 quantization to both the UNet and the VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization).
If the matmuls are too small, these techniques may degrade performance. Through experimentation, we found that certain linear layers in the UNet and the VAE don’t benefit from dynamic int8 quantization. You can check out the full code for filtering those layers here (referred to as dynamic_quant_filter_fn below). You will leverage the ultra-lightweight pure PyTorch library torchao to use its user-friendly APIs for quantization. First, configure all the compiler flags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Define the filtering functions: Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Then apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since this quantization support is limited to linear layers only, we also turn suitable pointwise convolution layers into linear layers to maximize the benefit. Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/67669e2b85fbcce0e78db05f924d9541.txt b/scrapped_outputs/67669e2b85fbcce0e78db05f924d9541.txt new file mode 100644 index 0000000000000000000000000000000000000000..a0d056105b047d4475ee55e44872902ed3daa0e9 --- /dev/null +++ b/scrapped_outputs/67669e2b85fbcce0e78db05f924d9541.txt @@ -0,0 +1,42 @@ +Conditional image generation + + + + + + + + + + + + +Conditional image generation allows you to generate images from a text prompt. The text is converted into embeddings which are used to condition the model to generate an image from noise.
+The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference. +Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +In this guide, you’ll use DiffusionPipeline for text-to-image generation with Latent Diffusion: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU. +You can move the generator object to a GPU, just like you would in PyTorch: + + + Copied +>>> generator.to("cuda") +Now you can use the generator on your text prompt: + + + Copied +>>> image = generator("An image of a squirrel in Picasso style").images[0] +The output is by default wrapped into a PIL.Image object. +You can save the image by calling: + + + Copied +>>> image.save("image_of_squirrel_painting.png") +Try out the Spaces below, and feel free to play around with the guidance scale parameter to see how it affects the image quality! diff --git a/scrapped_outputs/676c4c0bf9fb2d7c591c49fc33344784.txt b/scrapped_outputs/676c4c0bf9fb2d7c591c49fc33344784.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/676c4c0bf9fb2d7c591c49fc33344784.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... 
) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/676c901e6e0e40b3c74e425d4c725bdd.txt b/scrapped_outputs/676c901e6e0e40b3c74e425d4c725bdd.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/676c901e6e0e40b3c74e425d4c725bdd.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the +expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for a numpy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background.
strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
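As a brief usage sketch, load_lora_weights() can be called right after creating the pipeline; the LoRA repository and adapter names below are placeholders, not real checkpoints: Copied
from diffusers import StableDiffusionInpaintPipeline
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights into the UNet and text encoder.
# "my-username/my-inpaint-lora" is a hypothetical Hub repository.
pipe.load_lora_weights("my-username/my-inpaint-lora", adapter_name="my_style")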
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/67a794526b096a9e4bb3944ccd767980.txt b/scrapped_outputs/67a794526b096a9e4bb3944ccd767980.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/67bc2566830af03d7f6f2628c688c589.txt b/scrapped_outputs/67bc2566830af03d7f6f2628c688c589.txt new file mode 100644 index 0000000000000000000000000000000000000000..987c9209fcde600484b42a955615d555013bf385 --- /dev/null +++ b/scrapped_outputs/67bc2566830af03d7f6f2628c688c589.txt @@ -0,0 +1,367 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. 
Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
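As a quick orientation before the argument reference below: the strength argument implements the SDEdit idea from the abstract, controlling how much noise is added to the input image and therefore how many denoising steps actually run. A minimal sketch of the typical bookkeeping (illustrative only, not a verbatim excerpt of the pipeline's code):

num_inference_steps = 50
strength = 0.75

# Noise the input up to roughly strength * num_inference_steps of the schedule,
# then denoise from there; only the remaining steps are executed.
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
t_start = max(num_inference_steps - init_timestep, 0)
effective_steps = num_inference_steps - t_start  # 37 denoising steps in this case

With strength=1.0 the input image is fully replaced by noise and all 50 steps run; with small values only a few steps run and the output stays close to the input.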
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. 
text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. 
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
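A short usage sketch for the FreeU switches documented above (the scaling factors below are only illustrative; consult the official FreeU repository for values tuned to the checkpoint you use):

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Illustrative FreeU factors for a Stable Diffusion v1.x checkpoint.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
# ... run pipe(prompt=..., image=init_image, strength=0.75) as in the __call__ example above ...
pipe.disable_freeu()  # restores the default skip/backbone scaling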
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
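Tying the encode_prompt documentation above and __call__ together: prompt embeddings can be pre-computed once and reused across several generations. A minimal sketch, assuming the current diffusers behaviour of returning a (prompt_embeds, negative_prompt_embeds) pair:

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt once; the embeddings can then be passed to __call__ repeatedly.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "A fantasy landscape, trending on artstation",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# images = pipe(
#     prompt_embeds=prompt_embeds,
#     negative_prompt_embeds=negative_prompt_embeds,
#     image=init_image,  # prepared as in the img2img example above
#     strength=0.75,
# ).images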
FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
diff --git a/scrapped_outputs/67bc52fe691d297dba2616345d1a5145.txt b/scrapped_outputs/67bc52fe691d297dba2616345d1a5145.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/67bc52fe691d297dba2616345d1a5145.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class-conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple.
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/67d0bcf2ad1545df715358b0115911b6.txt b/scrapped_outputs/67d0bcf2ad1545df715358b0115911b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa6755ea3c7c8d60d3512e78072a458c1594b457 --- /dev/null +++ b/scrapped_outputs/67d0bcf2ad1545df715358b0115911b6.txt @@ -0,0 +1,552 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends for ODE/SDE solvers:set use_karras_sigmas=True or lu_lambdas=True to improve image quality set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE) Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. 
Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
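Following up on the scheduler Tips above, here is a minimal sketch of switching the pipeline to a DPM++ (DPMSolverMultistepScheduler) configuration with the suggested flags (the prompt and step count are only illustrative):

import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Karras sigmas help stabilise DPM++ solvers at low step counts, as recommended in the Tips.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
# For solvers with uniform step sizes (DPM++ 2M / 2M SDE), pass euler_at_final=True instead.

image = pipe("an astronaut riding a green horse", num_inference_steps=30).images[0]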
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
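Before the full parameter reference, here is a minimal sketch of the base-to-refiner hand-off that the denoising_end / denoising_start parameters below are designed for; the 0.8 split point is illustrative and the checkpoints are the official base and refiner models mentioned above:
 Copied
>>> import torch
>>> from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

>>> base = StableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
... ).to("cuda")
>>> refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-refiner-1.0",
...     text_encoder_2=base.text_encoder_2,
...     vae=base.vae,
...     torch_dtype=torch.float16,
...     variant="fp16",
... ).to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> # the base model denoises the first 80% of the timesteps and returns latents instead of images
>>> latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
>>> # the refiner picks up at the same point and denoises the remaining 20%
>>> image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]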
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. 
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. 
aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. 
If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca.
final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/6832698b74e5f77fa3115226a2a1dda9.txt b/scrapped_outputs/6832698b74e5f77fa3115226a2a1dda9.txt new file mode 100644 index 0000000000000000000000000000000000000000..50125022d5938337789747a2fa84c160bc116d70 --- /dev/null +++ b/scrapped_outputs/6832698b74e5f77fa3115226a2a1dda9.txt @@ -0,0 +1,973 @@ +Stable unCLIP + +Stable unCLIP checkpoints are finetuned from stable diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP also still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. +To know more about the unCLIP process, check out the following paper: +Hierarchical Text-Conditional Image Generation with CLIP Latents by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. + +Tips + +Stable unCLIP takes a noise_level as input during inference. noise_level determines how much noise is added +to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, +we do not add any additional noise to the image embeddings i.e. noise_level = 0. 
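For example, a minimal sketch with the image-variation checkpoint listed below; the noise_level values are only illustrative:
 Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

# noise_level=0 (the default) stays close to the input image embedding;
# larger values add more noise to the embedding and increase variation in the outputs.
close_variation = pipe(init_image, noise_level=0).images[0]
loose_variation = pipe(init_image, noise_level=500).images[0]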
+
+Available checkpoints:
+Image variation: stabilityai/stable-diffusion-2-1-unclip, stabilityai/stable-diffusion-2-1-unclip-small
+Text-to-image: stabilityai/stable-diffusion-2-1-unclip-small
+
+Text-to-Image Generation
+
+Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication, Karlo (https://huggingface.co/kakaobrain/karlo-v1-alpha).
+
+ Copied
+import torch
+from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
+from diffusers.models import PriorTransformer
+from transformers import CLIPTokenizer, CLIPTextModelWithProjection
+
+prior_model_id = "kakaobrain/karlo-v1-alpha"
+data_type = torch.float16
+prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)
+
+prior_text_model_id = "openai/clip-vit-large-patch14"
+prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
+prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
+prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
+prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)
+
+stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small"
+
+pipe = StableUnCLIPPipeline.from_pretrained(
+    stable_unclip_model_id,
+    torch_dtype=data_type,
+    variant="fp16",
+    prior_tokenizer=prior_tokenizer,
+    prior_text_encoder=prior_text_model,
+    prior=prior,
+    prior_scheduler=prior_scheduler,
+)
+
+pipe = pipe.to("cuda")
+wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular"
+
+images = pipe(prompt=wave_prompt).images
+images[0].save("waves.png")
+For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use.
+
+Text-guided Image-to-Image Variation
+
+ Copied
+from diffusers import StableUnCLIPImg2ImgPipeline
+from diffusers.utils import load_image
+import torch
+
+pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
+)
+pipe = pipe.to("cuda")
+
+url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
+init_image = load_image(url)
+
+images = pipe(init_image).images
+images[0].save("variation_image.png")
+Optionally, you can also pass a prompt to pipe such as:
+
+ Copied
+prompt = "A fantasy landscape, trending on artstation"
+
+images = pipe(init_image, prompt=prompt).images
+images[0].save("variation_image_two.png")
+
+Memory optimization
+
+If you are short on GPU memory, you can enable smart CPU offloading so that models that are not needed
+immediately for a computation can be offloaded to CPU:
+
+ Copied
+from diffusers import StableUnCLIPImg2ImgPipeline
+from diffusers.utils import load_image
+import torch
+
+pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
+)
+# Offload to CPU.
+pipe.enable_model_cpu_offload() + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0] +Further memory optimizations are possible by enabling VAE slicing on the pipeline: + + + Copied +from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16" +) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0] + +StableUnCLIPPipeline + + +class diffusers.StableUnCLIPPipeline + +< +source +> +( +prior_tokenizer: CLIPTokenizer +prior_text_encoder: CLIPTextModelWithProjection +prior: PriorTransformer +prior_scheduler: KarrasDiffusionSchedulers +image_normalizer: StableUnCLIPImageNormalizer +image_noising_scheduler: KarrasDiffusionSchedulers +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModelWithProjection +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +vae: AutoencoderKL + +) + + +Parameters + +prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. + + +prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. + + +image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. + + +image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by noise_level in StableUnCLIPPipeline.__call__. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + + +Pipeline for text-to-image generation using stable unCLIP. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 20 +guidance_scale: float = 10.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Optional[torch._C.Generator] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +prior_num_inference_steps: int = 25 +prior_guidance_scale: float = 4.0 +prior_latents: typing.Optional[torch.FloatTensor] = None + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings for details. + + +prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. + + +prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale for the prior denoising process as defined in Classifier-Free Diffusion +Guidance. prior_guidance_scale is defined as w of equation 2. of +Imagen Paper. Guidance scale is enabled by setting +guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to +the text prompt, usually at the expense of lower image quality. + + +prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied +random generator. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. 
+When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +noise_image_embeddings + +< +source +> +( +image_embeds: Tensor +noise_level: int +noise: typing.Optional[torch.FloatTensor] = None +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. 
+The noise is applied in two ways +A noise schedule is applied directly to the embeddings +A vector of sinusoidal time embeddings are appended to the output. +In both cases, the amount of noise is controlled by the same noise_level. +The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. + +StableUnCLIPImg2ImgPipeline + + +class diffusers.StableUnCLIPImg2ImgPipeline + +< +source +> +( +feature_extractor: CLIPImageProcessor +image_encoder: CLIPVisionModelWithProjection +image_normalizer: StableUnCLIPImageNormalizer +image_noising_scheduler: KarrasDiffusionSchedulers +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModel +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +vae: AutoencoderKL + +) + + +Parameters + +feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. + + +image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. + + +image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. + + +image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by noise_level in StableUnCLIPPipeline.__call__. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + + +Pipeline for text-guided image to image generation using stable unCLIP. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 20 +guidance_scale: float = 10 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Optional[torch._C.Generator] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +image_embeds: typing.Optional[torch.FloatTensor] = None + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch. 
The image will be encoded to its CLIP embedding which +the unet will be conditioned on. Note that the image is not encoded by the vae and then used as the +latents in the denoising process such as in the standard stable diffusion text guided image variation +process. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. 
+ + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings for details. + + +image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. Note that these are not latents to be used in +the denoising process. If you want to provide pre-generated latents, pass them to __call__ as +latents. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. 
+Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +noise_image_embeddings + +< +source +> +( +image_embeds: Tensor +noise_level: int +noise: typing.Optional[torch.FloatTensor] = None +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. +The noise is applied in two ways +A noise schedule is applied directly to the embeddings +A vector of sinusoidal time embeddings are appended to the output. +In both cases, the amount of noise is controlled by the same noise_level. +The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. diff --git a/scrapped_outputs/6842672822c340355d19dc8ae37e1d9a.txt b/scrapped_outputs/6842672822c340355d19dc8ae37e1d9a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6855f775ba74d4163f91637b5526b5e4.txt b/scrapped_outputs/6855f775ba74d4163f91637b5526b5e4.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/6855f775ba74d4163f91637b5526b5e4.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. 
Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. 
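Putting these tips together, a minimal sketch of the "cat -> dog" edit described above could look as follows (the checkpoint matches the examples below; the input image URL is a placeholder for your own picture of a cat):

import torch
from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

# placeholder URL: use your own image of a cat
init_image = load_image("https://example.com/cat.png").resize((768, 768))

# 1. mask: the edit direction is "cat -> dog", so "cat" goes to source_prompt and "dog" to target_prompt
mask_image = pipe.generate_mask(image=init_image, source_prompt="a cat", target_prompt="a dog")

# 2. partially invert the latents, guided by a caption of the source image
image_latents = pipe.invert(image=init_image, prompt="a photo of a cat sitting on a bench").latents

# 3. generate: target concept as prompt, source concept as negative_prompt
image = pipe(
    prompt="a photo of a dog sitting on a bench",
    negative_prompt="a photo of a cat sitting on a bench",
    mask_image=mask_image,
    image_latents=image_latents,
).images[0]
image.save("cat_to_dog.png")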
StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) —
+Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a
+plain tuple. callback (Callable, optional) —
+A function that calls every callback_steps steps during inference. The function is called with the
+following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) —
+The frequency at which the callback function is called. If not specified, the callback is called at
+every step. cross_attention_kwargs (dict, optional) —
+A kwargs dictionary that if specified is passed along to the
+AttnProcessor as defined in
+self.processor. lambda_auto_corr (float, optional, defaults to 20.0) —
+Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) —
+Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) —
+Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) —
+Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL
+>>> import requests
+>>> import torch
+>>> from io import BytesIO
+
+>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
+
+
+>>> def download_image(url):
+... response = requests.get(url)
+... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
+
+
+>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
+
+>>> init_image = download_image(img_url).resize((768, 768))
+
+>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
+... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
+... )
+>>> pipe = pipe.to("cuda")
+
+>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
+>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
+>>> pipe.enable_model_cpu_offload()
+
+>>> prompt = "A bowl of fruits"
+
+>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) —
+The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) —
+Image or tensor representing an image batch to mask the generated image. White pixels in the mask are
+repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a
+single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L)
+instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) —
+Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) —
+Indicates extent to inpaint the masked area. Must be between 0 and 1.
When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Copied >>> import PIL
+>>> import requests
+>>> import torch
+>>> from io import BytesIO
+
+>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
+
+
+>>> def download_image(url):
+... response = requests.get(url)
+... return PIL.Image.open(BytesIO(response.content)).convert("RGB")
+
+
+>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
+
+>>> init_image = download_image(img_url).resize((768, 768))
+
+>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
+... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
+... )
+>>> pipe = pipe.to("cuda")
+
+>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
+>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
+>>> pipe.enable_model_cpu_offload()
+
+>>> mask_prompt = "A bowl of fruits"
+>>> prompt = "A bowl of pears"
+
+>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt)
+>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents
+>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to
+computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to
+computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
+compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
+compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
+processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) —
+prompt to be encoded
+device — (torch.device):
+torch device num_images_per_prompt (int) —
+number of images that should be generated per prompt do_classifier_free_guidance (bool) —
+whether to use classifier free guidance or not negative_prompt (str or List[str], optional) —
+The prompt or prompts not to guide the image generation. If not defined, one has to pass
+negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is
+less than 1). prompt_embeds (torch.FloatTensor, optional) —
+Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
+provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) —
+Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
+weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
+argument. lora_scale (float, optional) —
+A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) —
+Number of layers to be skipped from CLIP while computing the prompt embeddings.
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/686e754d2e3dce4abbdca83b0d6992aa.txt b/scrapped_outputs/686e754d2e3dce4abbdca83b0d6992aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb002bda2ec40dcae9dd008d1a32f6d02e3caa74 --- /dev/null +++ b/scrapped_outputs/686e754d2e3dce4abbdca83b0d6992aa.txt @@ -0,0 +1,97 @@ +Dance Diffusion + + +Overview + +Dance Diffusion by Zach Evans. +Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai. +For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page. +The original codebase of this implementation can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dance_diffusion.py +Unconditional Audio Generation +- + +DanceDiffusionPipeline + + +class diffusers.DanceDiffusionPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet1DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +IPNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 100 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +audio_length_in_s: typing.Optional[float] = None +return_dict: bool = True + +) +→ +AudioPipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio sample at +the expense of slower inference. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. Note that the output of the pipeline, i.e. +sample_size, will be audio_length_in_s * self.unet.sample_rate. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. + + +Returns + +AudioPipelineOutput or tuple + + + +~pipelines.utils.AudioPipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
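As a quick usage sketch (the harmonai/maestro-150k checkpoint and the use of scipy to write the waveform to disk are assumptions for illustration, not requirements of the pipeline):

import torch
import scipy.io.wavfile
from diffusers import DanceDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k").to(device)

generator = torch.Generator(device).manual_seed(0)
output = pipe(audio_length_in_s=4.0, num_inference_steps=100, generator=generator)
audio = output.audios[0]  # NumPy array with shape (channels, samples)

# the model's sample rate is stored on the UNet config
scipy.io.wavfile.write("dance_diffusion.wav", rate=pipe.unet.config.sample_rate, data=audio.T)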
diff --git a/scrapped_outputs/6879750f50876238a21c3590a4fcfaa7.txt b/scrapped_outputs/6879750f50876238a21c3590a4fcfaa7.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/6879750f50876238a21c3590a4fcfaa7.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. 
Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/687c0df08edb049e27f3e12331cc4db0.txt b/scrapped_outputs/687c0df08edb049e27f3e12331cc4db0.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/687c0df08edb049e27f3e12331cc4db0.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. 
Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). 
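Example: a minimal predictor-corrector sampling sketch showing how set_timesteps() and set_sigmas() work together with the step_pred() and step_correct() methods documented below (the google/ncsnpp-church-256 checkpoint and the sample shape are assumptions, not part of this reference): Copied
import torch
from diffusers import ScoreSdeVePipeline

# Assumed SDE-VE checkpoint; any NCSN++-style score model with a matching scheduler should work.
pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-church-256")
unet, scheduler = pipe.unet, pipe.scheduler

num_inference_steps = 2000
# Both calls must run before sampling: the timesteps drive the reverse SDE, the sigmas set the noise scales.
scheduler.set_timesteps(num_inference_steps)
scheduler.set_sigmas(num_inference_steps)

sample_size = unet.config.sample_size
sample = torch.randn(1, 3, sample_size, sample_size) * scheduler.config.sigma_max

with torch.no_grad():
    for i, t in enumerate(scheduler.timesteps):
        sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0])

        # Corrector: refine the current sample for `correct_steps` Langevin-style correction steps.
        for _ in range(scheduler.config.correct_steps):
            model_output = unet(sample, sigma_t).sample
            sample = scheduler.step_correct(model_output, sample).prev_sample

        # Predictor: reverse the SDE by one discrete step.
        model_output = unet(sample, sigma_t).sample
        sample = scheduler.step_pred(model_output, t, sample).prev_sample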
step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/687f60cf1e8cbb9b901eedf069cc3d6b.txt b/scrapped_outputs/687f60cf1e8cbb9b901eedf069cc3d6b.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/687f60cf1e8cbb9b901eedf069cc3d6b.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. 
On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/68882391c55dc9390e14edae65eaea06.txt b/scrapped_outputs/68882391c55dc9390e14edae65eaea06.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6910e0b2910f575c896f3633d8003957.txt b/scrapped_outputs/6910e0b2910f575c896f3633d8003957.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/6910e0b2910f575c896f3633d8003957.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. 
dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/69a8288d56c0483edd9b184f90ab80ca.txt b/scrapped_outputs/69a8288d56c0483edd9b184f90ab80ca.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/69b38b0510055d9128e33a120d997e50.txt b/scrapped_outputs/69b38b0510055d9128e33a120d997e50.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/69b38b0510055d9128e33a120d997e50.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) 
DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. 
Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/69e529d8d15c02aa0ec763f2c820841b.txt b/scrapped_outputs/69e529d8d15c02aa0ec763f2c820841b.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/69e529d8d15c02aa0ec763f2c820841b.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/69f74eb7713a52890f318b41a915558e.txt b/scrapped_outputs/69f74eb7713a52890f318b41a915558e.txt new file mode 100644 index 0000000000000000000000000000000000000000..9f114fdb9e4df008a7dccedd3c1f0129e4d4d434 --- /dev/null +++ b/scrapped_outputs/69f74eb7713a52890f318b41a915558e.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. 
The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. 
norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). 
conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
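Example for the from_unet() method documented above (a minimal sketch; the runwayml/stable-diffusion-v1-5 checkpoint is used purely for illustration): Copied
from diffusers import ControlNetModel, UNet2DConditionModel

# Initialize a ControlNet from an existing Stable Diffusion UNet. With the default
# load_weights_from_unet=True, the matching encoder weights are copied over, so only
# the new conditioning layers start from scratch.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)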
ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height // resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_block_res_sample (torch.Tensor) — +The activation of the middle block (the lowest sample resolution). Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for its generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior.
Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/6a7555611832761c65b2938cfda961e0.txt b/scrapped_outputs/6a7555611832761c65b2938cfda961e0.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/6a7555611832761c65b2938cfda961e0.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. 
Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/6a7ee1d2687f864e20c55f236ae3b5e8.txt b/scrapped_outputs/6a7ee1d2687f864e20c55f236ae3b5e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ff136abe2e56da35c3fd6f9e1412b9b95cae0b8 --- /dev/null +++ b/scrapped_outputs/6a7ee1d2687f864e20c55f236ae3b5e8.txt @@ -0,0 +1,90 @@ +Philosophy + +🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. +We aim at building a library that stands the test of time and therefore take API design very seriously. +In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: + +Usability over Performance + +While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. +Diffusers aim at being a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. +Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. + +Simple over easy + +As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: +We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. 
+Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. +Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. +Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. Dreambooth or textual inversion training +is very simple thanks to diffusers’ ability to separate single components of the diffusion pipeline. + +Tweakable, contributor-friendly over abstraction + +For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: +Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. +Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. +Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. +At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. +In diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, UnCLIP (Dalle-2) and Imagen all rely on the same diffusion model, the UNet. 
+Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. + +Design Philosophy in Details + +Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consist of three major classes, pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. + +Pipelines + +Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%)), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. +The following design principles are followed: +Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. +Pipelines all inherit from DiffusionPipeline +Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. +Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. +Pipelines should be used only for inference. +Pipelines should be very readable, self-explanatory, and easy to tweak. +Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. +Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner +Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. +Pipelines should be named after the task they are intended to solve. +In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. + +Models + +Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. +The following design principles are followed: +Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. +All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… +Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. +Models intend to expose complexity, just like PyTorch’s module does, and give clear error messages. +Models all inherit from ModelMixin and ConfigMixin. 
+Models can be optimized for performance when the optimization doesn't demand major code changes, keeps backward compatibility, and gives a significant memory or compute gain. +Models should by default have the highest precision and lowest performance setting. +To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. +Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments and configuration arguments and by "foreseeing" future changes; e.g., it is usually better to add string "...type" arguments that can easily be extended to new future types than boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. +The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. + +Schedulers + +Schedulers are responsible for guiding the denoising process for inference as well as for defining a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. +The following design principles are followed: +All schedulers are found in src/diffusers/schedulers. +Schedulers are not allowed to import from large utils files and shall be kept very self-contained. +One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). +If schedulers share similar functionalities, we can make use of the #Copied from mechanism. +Schedulers all inherit from SchedulerMixin and ConfigMixin. +Schedulers can be easily swapped out with the ConfigMixin.from_config method, as explained in detail here and shown in the short sketch at the end of this section. +Every scheduler has to have a set_num_inference_steps and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. +Every scheduler exposes the timesteps to be "looped over" via a timesteps attribute, which is an array of timesteps the model will be called upon. +The step(...) function takes a predicted model output and the "current" sample (x_t) and returns the "previous", slightly more denoised sample (x_t-1). +Given the complexity of diffusion schedulers, the step function does not expose all of the complexity and can be a bit of a "black box". +In almost all cases, novel schedulers shall be implemented in a new scheduling file.
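As a short illustration of the swappable-scheduler principle from the list above, the following sketch rebuilds a pipeline's scheduler from the existing scheduler's config. The checkpoint name is only an assumption for the example; any pipeline with a compatible scheduler works the same way.

from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# Load a pipeline (illustrative checkpoint) and replace its default scheduler
# with a DPM-Solver++ multistep scheduler built from the current scheduler's config.
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

Because every scheduler inherits from SchedulerMixin and ConfigMixin, the new scheduler picks up the shared configuration (e.g. the noise schedule parameters) and slots into the pipeline without any other changes.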
diff --git a/scrapped_outputs/6a819bfb31522cff885a00be4bde3920.txt b/scrapped_outputs/6a819bfb31522cff885a00be4bde3920.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/6a819bfb31522cff885a00be4bde3920.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. 
So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. 
Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/6a8f06bbe213d769b41061f348d28e3d.txt b/scrapped_outputs/6a8f06bbe213d769b41061f348d28e3d.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/6a8f06bbe213d769b41061f348d28e3d.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. 
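For context before the API reference below, this is a rough usage sketch of the pipeline built around this scheduler. It assumes the microsoft/vq-diffusion-ithq checkpoint and that VQDiffusionPipeline is available in your version of Diffusers; treat it as an illustration rather than the canonical example.

from diffusers import VQDiffusionPipeline

# VQDiffusionScheduler is instantiated internally as the pipeline's scheduler component.
pipeline = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipeline = pipeline.to("cuda")

image = pipeline("teddy bear playing in the pool").images[0]
image.save("teddy_bear.png")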
VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.000009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.000009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding. +C_k is the class of the masked latent pixel. + +Non-cumulative result (omitting logarithms):

q_0(x_t | x_{t-1} = C_0)   ...   q_n(x_t | x_{t-1} = C_0)
           .                .                .
           .                .                .
           .                .                .
q_0(x_t | x_{t-1} = C_k)   ...   q_n(x_t | x_{t-1} = C_k)

Cumulative result (omitting logarithms):

q_0_cumulative(x_t | x_0 = C_0)       ...   q_n_cumulative(x_t | x_0 = C_0)
           .                           .                .
           .                           .                .
           .                           .                .
q_0_cumulative(x_t | x_0 = C_{k-1})   ...   q_n_cumulative(x_t | x_0 = C_{k-1})

Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used.
Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1. + Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved +to. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. generator (torch.Generator, or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computer. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/6ad8503afcdeff91e2fc1dbf8a602f79.txt b/scrapped_outputs/6ad8503afcdeff91e2fc1dbf8a602f79.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/6ad8503afcdeff91e2fc1dbf8a602f79.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To dislay in google colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/6adde0913d4bbb76a2865ff13e9569c6.txt b/scrapped_outputs/6adde0913d4bbb76a2865ff13e9569c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/6adde0913d4bbb76a2865ff13e9569c6.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate The are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide. 
Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade-off inference speed for lower memory requirement: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering. Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks togethere should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Diffusion Video also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the values the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. 
For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/6b11e0fca86fc95a89f1650cf1c85a13.txt b/scrapped_outputs/6b11e0fca86fc95a89f1650cf1c85a13.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/6b11e0fca86fc95a89f1650cf1c85a13.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. 
decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/6b138f3b740e2f1572f4d2253f842b80.txt b/scrapped_outputs/6b138f3b740e2f1572f4d2253f842b80.txt new file mode 100644 index 0000000000000000000000000000000000000000..db47242d5b00684f783685474911a8d89dd98131 --- /dev/null +++ b/scrapped_outputs/6b138f3b740e2f1572f4d2253f842b80.txt @@ -0,0 +1,250 @@ +Logging + +🧨 Diffusers has a centralized logging system, so that you can setup the verbosity of the library easily. +Currently the default verbosity of the library is WARNING. +To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity +to the INFO level. + + + Copied +import diffusers + +diffusers.logging.set_verbosity_info() +You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: + + + Copied +DIFFUSERS_VERBOSITY=error ./myprogram.py +Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using +logger.warning_advice. 
For example: + + + Copied +DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py +Here is an example of how to use the same logger as the library in your own module or script: + + + Copied +from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") +All the methods of this logging module are documented below, the main ones are +logging.get_verbosity() to get the current level of verbosity in the logger and +logging.set_verbosity() to set the verbosity to the level of your choice. In order (from the least +verbose to the most verbose), those levels (with their corresponding int values in parenthesis) are: +diffusers.logging.CRITICAL or diffusers.logging.FATAL (int value, 50): only report the most +critical errors. +diffusers.logging.ERROR (int value, 40): only report errors. +diffusers.logging.WARNING or diffusers.logging.WARN (int value, 30): only reports error and +warnings. This the default level used by the library. +diffusers.logging.INFO (int value, 20): reports error, warnings and basic information. +diffusers.logging.DEBUG (int value, 10): report all information. +By default, tqdm progress bars will be displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() can be used to suppress or unsuppress this behavior. + +Base setters + + +diffusers.utils.logging.set_verbosity_error + +< +source +> +( +) + + + +Set the verbosity to the ERROR level. + +diffusers.utils.logging.set_verbosity_warning + +< +source +> +( +) + + + +Set the verbosity to the WARNING level. + +diffusers.utils.logging.set_verbosity_info + +< +source +> +( +) + + + +Set the verbosity to the INFO level. + +diffusers.utils.logging.set_verbosity_debug + +< +source +> +( +) + + + +Set the verbosity to the DEBUG level. + +Other functions + + +diffusers.utils.logging.get_verbosity + +< +source +> +( +) +→ +int + +Returns + +int + + + +The logging level. + + +Return the current level for the 🤗 Diffusers’ root logger as an int. +🤗 Diffusers has following logging levels: +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + +diffusers.utils.logging.set_verbosity + +< +source +> +( +verbosity: int + +) + + +Parameters + +verbosity (int) — +Logging level, e.g., one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + + + + +Set the verbosity level for the 🤗 Diffusers’ root logger. + +diffusers.utils.get_logger + +< +source +> +( +name: typing.Optional[str] = None + +) + + + +Return a logger with the specified name. +This function is not supposed to be directly accessed unless you are writing a custom diffusers module. + +diffusers.utils.logging.enable_default_handler + +< +source +> +( +) + + + +Enable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.disable_default_handler + +< +source +> +( +) + + + +Disable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.enable_explicit_format + +< +source +> +( +) + + + + +Enable explicit formatting for every HuggingFace Diffusers’ logger. 
The explicit formatter is as follows: + + + Copied + [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. + + +diffusers.utils.logging.reset_format + +< +source +> +( +) + + + +Resets the formatting for HuggingFace Diffusers’ loggers. +All handlers currently bound to the root logger are affected by this method. + +diffusers.utils.logging.enable_progress_bar + +< +source +> +( +) + + + +Enable tqdm progress bar. + +diffusers.utils.logging.disable_progress_bar + +< +source +> +( +) + + + +Disable tqdm progress bar. diff --git a/scrapped_outputs/6b20ccf56f0a4cea58770bfa77295609.txt b/scrapped_outputs/6b20ccf56f0a4cea58770bfa77295609.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/6b20ccf56f0a4cea58770bfa77295609.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/6b224256867c19946ba6112f8d23fdf5.txt b/scrapped_outputs/6b224256867c19946ba6112f8d23fdf5.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/6b224256867c19946ba6112f8d23fdf5.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. 
timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/6b2afe9432831cf4b880c0220784eb69.txt b/scrapped_outputs/6b2afe9432831cf4b880c0220784eb69.txt new file mode 100644 index 0000000000000000000000000000000000000000..6024bf1a00e90500c0a7ce1aa584ee7f009df150 --- /dev/null +++ b/scrapped_outputs/6b2afe9432831cf4b880c0220784eb69.txt @@ -0,0 +1,448 @@ +Singlestep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverSinglestepScheduler + + +class diffusers.DPMSolverSinglestepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon), or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. 
+ + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. For singlestep schedulers, we recommend to enable +this to use up all the function evaluations. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the singlestep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. 
+ + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +get_order_list + +< +source +> +( +num_inference_steps: int + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Computes the solver order at each time step. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +singlestep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-2]. + +singlestep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-3]. 
+ +singlestep_dpm_solver_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +order (int) — +the solver order at this step. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the singlestep DPM-Solver. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the singlestep DPM-Solver. diff --git a/scrapped_outputs/6b2f5f6b17115bb36bf5c7208175a77c.txt b/scrapped_outputs/6b2f5f6b17115bb36bf5c7208175a77c.txt new file mode 100644 index 0000000000000000000000000000000000000000..04f6b66657b62f8093d03c690225df93d111640b --- /dev/null +++ b/scrapped_outputs/6b2f5f6b17115bb36bf5c7208175a77c.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. 
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns +UNet3DConditionOutput or tuple + +If return_dict is True, an UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. 
In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/6b53dbd562aaa9c331221f666aa046bd.txt b/scrapped_outputs/6b53dbd562aaa9c331221f666aa046bd.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/6b53dbd562aaa9c331221f666aa046bd.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. 
Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. 
Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. 
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. 
But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/6b5a578396ae2db9997802bd36407f47.txt b/scrapped_outputs/6b5a578396ae2db9997802bd36407f47.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef62c086e705e0fd98841711ee18a967fbc85f5e --- /dev/null +++ b/scrapped_outputs/6b5a578396ae2db9997802bd36407f47.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
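In practice you rarely build a UNetMotionModel by hand; it is assembled from a 2D UNet and a motion adapter inside a video pipeline such as AnimateDiff. A rough sketch of how you end up with this model (the checkpoint names are examples, substitute the adapter and base model you actually use):

import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

# Example checkpoints; any compatible Stable Diffusion 1.5 base model works.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# The pipeline's denoiser is a UNetMotionModel built from the 2D UNet plus the motion modules.
print(type(pipe.unet).__name__)  # UNetMotionModel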
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/6b7923fdf62fdb029485604b123b09e6.txt b/scrapped_outputs/6b7923fdf62fdb029485604b123b09e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..191230d895650a96c9b8f907a3911fdd00d72140 --- /dev/null +++ b/scrapped_outputs/6b7923fdf62fdb029485604b123b09e6.txt @@ -0,0 +1,55 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. 
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. 
DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
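Putting the pieces above together, a bare-bones ancestral sampling loop looks roughly like this. The random tensor only stands in for the output of a trained model (for example a UNet2DModel) so the set_timesteps()/step() call pattern is visible:

import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 3, 64, 64)  # start from pure Gaussian noise
for t in scheduler.timesteps:
    # In a real pipeline this would be: model_output = unet(sample, t).sample
    model_output = torch.randn_like(sample)
    # step() reverses one diffusion step and returns a DDPMSchedulerOutput
    sample = scheduler.step(model_output, t, sample).prev_sample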
diff --git a/scrapped_outputs/6b964843bb71811a7f0976ff5e376fe4.txt b/scrapped_outputs/6b964843bb71811a7f0976ff5e376fe4.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/6b964843bb71811a7f0976ff5e376fe4.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. 
For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. 
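The key detail is that only the embedding rows belonging to the placeholder token(s) are actually learned; everything else is frozen and the untouched rows are restored after each optimizer step. A simplified sketch of that idea, reusing the tokenizer, text_encoder, and placeholder_token_ids from the snippets above (not the script verbatim):

import torch

# Train only the token embedding table of the text encoder.
text_encoder.requires_grad_(False)
text_encoder.get_input_embeddings().requires_grad_(True)
optimizer = torch.optim.AdamW(text_encoder.get_input_embeddings().parameters(), lr=5e-4)

# Keep a copy so every row except the placeholder token(s) can be restored.
orig_embeds = text_encoder.get_input_embeddings().weight.data.clone()
keep = torch.ones((len(tokenizer),), dtype=torch.bool)
keep[min(placeholder_token_ids) : max(placeholder_token_ids) + 1] = False

# Inside the training loop, after loss.backward() and optimizer.step():
with torch.no_grad():
    text_encoder.get_input_embeddings().weight[keep] = orig_embeds[keep]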
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="A train" +--num_validation_images=4 +--validation_steps=100 PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A train", num_inference_steps=50).images[0] +image.save("cat-train.png") Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/6baeb24297eae2b02d7d5819327b53be.txt b/scrapped_outputs/6baeb24297eae2b02d7d5819327b53be.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/6baeb24297eae2b02d7d5819327b53be.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. 
Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/6bb69a6c3938d341b5b51ff1101b4bb3.txt b/scrapped_outputs/6bb69a6c3938d341b5b51ff1101b4bb3.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/6bb69a6c3938d341b5b51ff1101b4bb3.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. 
Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... 
axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. 
Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... 
os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! 
Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/6c02308d3fe451e2410f05858062f747.txt b/scrapped_outputs/6c02308d3fe451e2410f05858062f747.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/6c02308d3fe451e2410f05858062f747.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. 
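A small usage sketch combining these setters with get_verbosity (documented under the other functions below): temporarily silence everything except errors, then restore the previous level. This is an illustrative pattern, not a dedicated API. Copied
from diffusers.utils import logging

previous_level = logging.get_verbosity()  # e.g. 30 (WARNING) by default
logging.set_verbosity_error()             # only report errors while this is active

# ... run code that would otherwise emit warnings ...

logging.set_verbosity(previous_level)     # restore the original verbosity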
Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/6c95c8b1f32841bf957d0ea8eafb1c98.txt b/scrapped_outputs/6c95c8b1f32841bf957d0ea8eafb1c98.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6ca02abc27e7a4d4fa69ea44a3cd89e2.txt b/scrapped_outputs/6ca02abc27e7a4d4fa69ea44a3cd89e2.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/6ca02abc27e7a4d4fa69ea44a3cd89e2.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. 
If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. 
Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. 
To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/6cbacd3cc88a459e2522e99b287b7b9b.txt b/scrapped_outputs/6cbacd3cc88a459e2522e99b287b7b9b.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/6cbacd3cc88a459e2522e99b287b7b9b.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
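As a short sketch reusing the DDIMPipeline example from the beginning of this section, the attribute, dictionary, and to_tuple() access styles described for BaseOutput all return the same images: Copied
from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()

# attribute access, dictionary-style lookup, and tuple conversion all point
# at the same data; to_tuple() drops any attributes that are None
images = outputs.images
images = outputs["images"]
images = outputs.to_tuple()[0]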
FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/6cfc7d2c07539b1a092147d223165a5c.txt b/scrapped_outputs/6cfc7d2c07539b1a092147d223165a5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..619c1344357a2477dbdb089431e14fc3b2eaccb0 --- /dev/null +++ b/scrapped_outputs/6cfc7d2c07539b1a092147d223165a5c.txt @@ -0,0 +1,130 @@ +improved pseudo numerical methods for diffusion models (iPNDM) + + +Overview + +Original implementation can be found here. + +IPNDMScheduler + + +class diffusers.IPNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + + +Improved Pseudo numerical methods for diffusion models (iPNDM) ported from @crowsonkb’s amazing k-diffusion +library +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. 
+ + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple +times to approximate the solution. diff --git a/scrapped_outputs/6d1bdd57506c5a6463fc2b589bb1327b.txt b/scrapped_outputs/6d1bdd57506c5a6463fc2b589bb1327b.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/6d1bdd57506c5a6463fc2b589bb1327b.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. 
The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have run the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/6db332b46c4c2c5fb7797a282029aa18.txt b/scrapped_outputs/6db332b46c4c2c5fb7797a282029aa18.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/6db332b46c4c2c5fb7797a282029aa18.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate There are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide.
Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for lower memory usage: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering. Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks together should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Video Diffusion also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the value, the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video.
For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/6e52a3e667a05a2ff771426c3a9216ef.txt b/scrapped_outputs/6e52a3e667a05a2ff771426c3a9216ef.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/6e52a3e667a05a2ff771426c3a9216ef.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. 
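These memory helpers apply to the upscaling pipeline itself. A rough sketch, reusing the stabilityai/stable-diffusion-x4-upscaler checkpoint from the example above (the surrounding generation call is elided): Copied
import torch
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")

# compute attention in slices to lower peak memory at a small speed cost
pipeline.enable_attention_slicing()

# ... call pipeline(prompt=..., image=low_res_img) as in the example above ...

# switch back to computing attention in one step
pipeline.disable_attention_slicing()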
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/6e714c67ef6d49074c413f503a9a5577.txt b/scrapped_outputs/6e714c67ef6d49074c413f503a9a5577.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6e7e39867a740ea62a151ecc7a7af4f8.txt b/scrapped_outputs/6e7e39867a740ea62a151ecc7a7af4f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/6e7e39867a740ea62a151ecc7a7af4f8.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. 
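As a minimal sketch of the forward method documented above, instantiated with the default configuration (so in_channels=2 and sample_size=65536; if your installed version rejects these exact defaults, adjust the shapes to match your checkpoint’s config): Copied
import torch
from diffusers import UNet1DModel

model = UNet1DModel()  # default config: Fourier time embedding, 2 input/output channels

sample = torch.randn(1, 2, 65536)  # (batch_size, num_channels, sample_size)
timestep = torch.tensor([10])      # one timestep for the whole batch

with torch.no_grad():
    output = model(sample, timestep).sample  # same (batch, channels, length) layout as the input

print(output.shape)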
UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/6e8022c63219390139176d8dc582d30b.txt b/scrapped_outputs/6e8022c63219390139176d8dc582d30b.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/6e8022c63219390139176d8dc582d30b.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. 
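The sections below focus on a memory-constrained workflow that splits text encoding and denoising into separate stages. If you simply want to try the model on a GPU with enough VRAM first, a minimal end-to-end sketch looks like this (the prompt and output filename are arbitrary): Copied
import torch
from diffusers import PixArtAlphaPipeline

# the same checkpoint id used throughout this page; fp16 keeps memory manageable
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

prompt = "A small cactus with a happy face in the Sahara desert"
image = pipe(prompt).images[0]
image.save("cactus.png")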
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", + +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up som GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass it off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory-usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. 
transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". 
do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it’s should be the embeddings of the "" +string. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/6e94607ff70b283db7b0e840370b00cc.txt b/scrapped_outputs/6e94607ff70b283db7b0e840370b00cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/6e94607ff70b283db7b0e840370b00cc.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/6e971af6d73037d0310fa01f2b0e2aa0.txt b/scrapped_outputs/6e971af6d73037d0310fa01f2b0e2aa0.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f8a405dfba4e08faba47f75f9a9a309028d423f --- /dev/null +++ b/scrapped_outputs/6e971af6d73037d0310fa01f2b0e2aa0.txt @@ -0,0 +1,381 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. 
+Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of frames to be generated.
Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24 #24 ÷ 4fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL Support In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control To generate a video from a prompt with additional pose control: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract poses from an actual video, read the ControlNet documentation.
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation using the Canny edge ControlNet model; a minimal sketch is shown below.
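A minimal sketch of the edge-controlled variant, assuming video_path points to a short real video clip (for example one downloaded with hf_hub_download as above); the Canny thresholds and the prompt are illustrative, and opencv-python is required for the edge extraction step.

import cv2
import imageio
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# Read a few frames from the source clip (`video_path` is assumed to be defined as in the steps above)
reader = imageio.get_reader(video_path, "ffmpeg")
frame_count = 8

# Extract a Canny edge map per frame (thresholds are illustrative) and make it a 3-channel 512x512 image
canny_edges = []
for i in range(frame_count):
    gray = cv2.cvtColor(reader.get_data(i), cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    canny_edges.append(Image.fromarray(np.stack([edges] * 3, axis=-1)).resize((512, 512)))

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Cross-frame attention keeps appearance consistent across frames
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# fix latents for all frames (64x64 latents correspond to 512x512 outputs)
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "a silver robot walking through a neon city at night"
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)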
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. diff --git a/scrapped_outputs/6e9ebc8d145078be53b3cd2ae96edb06.txt b/scrapped_outputs/6e9ebc8d145078be53b3cd2ae96edb06.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/6e9ebc8d145078be53b3cd2ae96edb06.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. 
To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... ) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/6ea7aa947515d145e1f7c287227cbf5e.txt b/scrapped_outputs/6ea7aa947515d145e1f7c287227cbf5e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6ed8936f12e556255342fa6ff03507e6.txt b/scrapped_outputs/6ed8936f12e556255342fa6ff03507e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2eb8974cda89e11056bf65f8c38cb7c6ff2a3e9 --- /dev/null +++ b/scrapped_outputs/6ed8936f12e556255342fa6ff03507e6.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert "TPU" in device_type, ( + "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +) +# Found 8 JAX devices of type Cloud TPU.
Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. 
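To make the shape constraint concrete, here is a minimal illustration with a toy jitted function (a sketch, not part of the Stable Diffusion pipeline): the compiled version is reused as long as the input shape and dtype stay the same, while a new shape triggers a fresh compilation.

import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    # toy stand-in for the pipeline call
    return jnp.sum(x ** 2)

f(jnp.ones((8, 77)))   # first call with this shape: traced and compiled
f(jnp.zeros((8, 77)))  # same shape, different values: reuses the compiled code
f(jnp.ones((8, 64)))   # new shape: triggers a recompilation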
The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. 
The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) Resources To learn more about how JAX works with Stable Diffusion, you may be interested in reading: Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e diff --git a/scrapped_outputs/6ee77cc79f865fac050e188e8ecc2b21.txt b/scrapped_outputs/6ee77cc79f865fac050e188e8ecc2b21.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/6ee77cc79f865fac050e188e8ecc2b21.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. 
+ model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. 
+ Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. +>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/6ee8991b4bd5d04ca927bbbfb77f3a7d.txt b/scrapped_outputs/6ee8991b4bd5d04ca927bbbfb77f3a7d.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/6ee8991b4bd5d04ca927bbbfb77f3a7d.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. 
Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, which rescales the noise schedule to zero terminal signal-to-noise ratio (SNR); and timestep_spacing="trailing", which starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/6efcd34c99826ed3fcebac7662283216.txt b/scrapped_outputs/6efcd34c99826ed3fcebac7662283216.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6f2b41b325037bc18b165d8219e1792c.txt b/scrapped_outputs/6f2b41b325037bc18b165d8219e1792c.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/6f2b41b325037bc18b165d8219e1792c.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect (latency / speed-up): original 9.50s / x1, fp16 3.61s / x2.63, channels last 3.30s / x2.88, traced UNet 3.21s / x2.96, memory-efficient attention 2.63s / x3.61. Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.
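The introduction above recommends xFormers or PyTorch 2.0's torch.nn.functional.scaled_dot_product_attention for memory-efficient attention. As a hedged sketch (on PyTorch 2.0, pipelines pick up scaled dot-product attention automatically, so no extra call is needed there), enabling xFormers on an fp16 pipeline looks roughly like this, assuming the xformers package is installed:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")

# memory-efficient attention via xFormers (not needed on PyTorch 2.0, where
# scaled dot-product attention is used by default)
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]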
diff --git a/scrapped_outputs/6f4f08c81b0747604498b7103cc223a6.txt b/scrapped_outputs/6f4f08c81b0747604498b7103cc223a6.txt new file mode 100644 index 0000000000000000000000000000000000000000..051657a0a8a5f093e79718c5c36eb36f9c0bf990 --- /dev/null +++ b/scrapped_outputs/6f4f08c81b0747604498b7103cc223a6.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. 
Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. 
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). 
Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". 
offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. 
Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/6f94dbda1098b11b4a617cd93cc045e8.txt b/scrapped_outputs/6f94dbda1098b11b4a617cd93cc045e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/6f94dbda1098b11b4a617cd93cc045e8.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. 
Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/6fa99bc98781172abd60861cce9bc969.txt b/scrapped_outputs/6fa99bc98781172abd60861cce9bc969.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/6fc8b55a98336089f552a5323e6401c2.txt b/scrapped_outputs/6fc8b55a98336089f552a5323e6401c2.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/6fc8b55a98336089f552a5323e6401c2.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. 
Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. 
Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/6fec328a37cd88dab85e333a2b9df716.txt b/scrapped_outputs/6fec328a37cd88dab85e333a2b9df716.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/6fec328a37cd88dab85e333a2b9df716.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). 
Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
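As a minimal, illustrative sketch of putting the pipeline above to use (assuming the kakaobrain/karlo-v1-alpha checkpoint and a CUDA device are available; the prompt is only an example), text-to-image generation with UnCLIPPipeline could look like this: Copied
import torch
from diffusers import UnCLIPPipeline

# Load the unCLIP (Karlo) checkpoint in half precision and move it to the GPU.
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"

# The prior, decoder, and super-resolution stages all run inside a single call;
# the *_num_inference_steps arguments mirror the parameters documented above.
image = pipe(prompt, prior_num_inference_steps=25, decoder_num_inference_steps=25).images[0]
image.save("frog.png")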
diff --git a/scrapped_outputs/6fed15c7569369807f7c4ae6bf8a0e3f.txt b/scrapped_outputs/6fed15c7569369807f7c4ae6bf8a0e3f.txt new file mode 100644 index 0000000000000000000000000000000000000000..2bdd92145f3e1d95344f3a558a1b0165b1443a85 --- /dev/null +++ b/scrapped_outputs/6fed15c7569369807f7c4ae6bf8a0e3f.txt @@ -0,0 +1,403 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. 
+ + + + Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image + + + + Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +import torch + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() Pass the image_embeds and negative_image_embeds to the KandinskyV22Pipeline to generate an image: Copied image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image + + +Kandinsky 3 doesn’t require a prior model so you can directly load the Kandinsky3Pipeline and pass a prompt to generate an image: Copied from diffusers import Kandinsky3Pipeline +import torch + +pipeline = Kandinsky3Pipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +image = pipeline(prompt).images[0] +image + + +🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image + + + Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline: + + + + Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + + + Copied import torch +from diffusers import KandinskyV22Img2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyV22Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + +Kandinsky 3 doesn’t require a prior model so you can directly load the image-to-image pipeline: Copied from diffusers import Kandinsky3Img2ImgPipeline +from diffusers.utils import load_image +import torch + +pipeline = Kandinsky3Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + + +Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: + + + + Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), 
image.resize((512, 512))], rows=1, cols=2) + + + + Copied from diffusers.utils import make_image_grid + +image = pipeline(image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + + Copied image = pipeline(prompt, negative_prompt=negative_prompt, image=image, strength=0.75, num_inference_steps=25).images[0] +image + + +🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: + + + + Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + + Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: + + + + Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + + + Copied from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyV22InpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + +Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: + + + + Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + + Copied output_image = pipeline(image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + +You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. 
Use the AutoPipelineForInpainting for this: + + + + Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + + Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=original_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: + + + + Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) + + + + Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) + + + a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: + + + + Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image + + + + Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image + + + ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, 
negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/703661df68cebf64017589b86144cbe2.txt b/scrapped_outputs/703661df68cebf64017589b86144cbe2.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/704bcd4e3f454b05b0d156eaeebfdf87.txt b/scrapped_outputs/704bcd4e3f454b05b0d156eaeebfdf87.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/704bcd4e3f454b05b0d156eaeebfdf87.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. 
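To make the zero-convolution idea concrete, here is a minimal PyTorch sketch (an illustration of the mechanism rather than the actual diffusers implementation; the channel count and tensor shapes are hypothetical). A 1x1 convolution whose weights and bias start at zero contributes nothing at the beginning of training, so the locked copy's pretrained behavior is preserved while the trainable copy gradually learns to add its control signal: Copied
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    # 1x1 convolution initialized to zero, connecting the trainable copy to the locked copy.
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

# Hypothetical feature maps from the locked (pretrained) copy and the trainable copy.
locked_features = torch.randn(1, 320, 64, 64)
control_features = torch.randn(1, 320, 64, 64)

# At initialization the residual is exactly zero, so the output equals the locked features;
# training gradually "switches on" the conditioning signal.
output = locked_features + zero_conv(320)(control_features)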
This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. 
Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. 
Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:2] == image_mask.shape[0:2] # image and mask must have the same height and width + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do its best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline; it is also recommended to set the guidance_scale value between 3.0 and 5.0.
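The exponential schedule described above is easy to inspect numerically. The sketch below is only an illustration of that scaling (not necessarily the pipeline's exact internals) and assumes the usual 12 DownBlock residuals plus 1 MidBlock residual, giving 13 scales that ramp from 0.1 to 1.0: Copied import torch
+
+# illustrative guess mode schedule: 12 DownBlock residual scales ramping up from 0.1,
+# plus the MidBlock residual scale at 1.0 (13 values, exponentially spaced)
+scales = torch.logspace(-1, 0, steps=13)
+print([round(s.item(), 3) for s in scales])
+# [0.1, 0.121, 0.147, ..., 0.681, 0.825, 1.0]
The full example below generates an image in guess mode without a prompt: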
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so they are easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use an SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and preparing the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load an SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number!
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting guess_mode=True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image.
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/7058746b82777cd39c064c9ac99fd41d.txt b/scrapped_outputs/7058746b82777cd39c064c9ac99fd41d.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/7058746b82777cd39c064c9ac99fd41d.txt @@ -0,0 
+1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. 
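For example, a minimal sketch of that preparation step (the full snippet below repeats it so it stays self-contained; the resize is optional): Copied from diffusers.utils import load_image
+
+# any RGB image works as a starting point; this is the example image used below
+init_image = load_image(
+    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
+)
+# optional: resize toward the 512x512 resolution Stable Diffusion v1.5 was finetuned on
+init_image = init_image.resize((512, 512))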
Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. 
The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. 
For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. 
Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. 
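If you want to make that requirement explicit, one option is to reuse the first pipeline's VAE when loading the next checkpoint. This is only a sketch, where pipeline is the image-to-image pipeline created above, the next_pipeline name is hypothetical, and the second checkpoint is assumed to be compatible with the first pipeline's VAE: Copied import torch
+from diffusers import AutoPipelineForImage2Image
+
+# reuse the VAE from the first pipeline so its latents are interpreted consistently
+next_pipeline = AutoPipelineForImage2Image.from_pretrained(
+    "ogkalu/Comic-Diffusion", vae=pipeline.vae, torch_dtype=torch.float16
+)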
Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. 
Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
diff --git a/scrapped_outputs/708a2d4ed21276e63c17b40798d84f45.txt b/scrapped_outputs/708a2d4ed21276e63c17b40798d84f45.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/708a2d4ed21276e63c17b40798d84f45.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. 
strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_prompt = "bad, deformed, ugly, bad anatomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence.
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. 
save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/70bb48a57e161dca504a0ca127867586.txt b/scrapped_outputs/70bb48a57e161dca504a0ca127867586.txt new file mode 100644 index 0000000000000000000000000000000000000000..70e8ff8ae9a89a38f63fb94929d9090c96587fe0 --- /dev/null +++ b/scrapped_outputs/70bb48a57e161dca504a0ca127867586.txt @@ -0,0 +1,435 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. 
Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. 
Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + beta_schedule="linear", + clip_sample=False, + timestep_spacing="linspace", + steps_offset=1 +) + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_vae_tiling() + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# run inference +output = pipe( + prompt="a panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=20, + generator=torch.Generator("cpu").manual_seed(666), +) + +# disable FreeInit +pipe.disable_free_init() + +frames = output.frames[0] +export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to use_fast_sampling=False, but still with better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Using AnimateLCM AnimateLCM is a motion module checkpoint and an LCM LoRA that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
Copied import torch +from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") +pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") + +pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") + +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=1.5, + num_inference_steps=6, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animatelcm.gif") A space rocket, 4K. + AnimateLCM is also compatible with existing Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM") +pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear") + +pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up") + +pipe.set_adapters(["lcm-lora", "tilt-up"], [1.0, 0.8]) +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=1.5, + num_inference_steps=6, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animatelcm-motion-lora.gif") A space rocket, 4K. + AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
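As an illustration of how the parameters above map onto concrete components, the pipeline can also be assembled by hand from individually loaded parts instead of through from_pretrained(); this is a rough sketch and the checkpoint names are examples only:
Copied
from diffusers import AnimateDiffPipeline, AutoencoderKL, DDIMScheduler, MotionAdapter, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

# example base checkpoint; any Stable Diffusion 1.4/1.5 derived model should work
base = "runwayml/stable-diffusion-v1-5"

pipe = AnimateDiffPipeline(
    vae=AutoencoderKL.from_pretrained(base, subfolder="vae"),
    text_encoder=CLIPTextModel.from_pretrained(base, subfolder="text_encoder"),
    tokenizer=CLIPTokenizer.from_pretrained(base, subfolder="tokenizer"),
    unet=UNet2DConditionModel.from_pretrained(base, subfolder="unet"),
    motion_adapter=MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2"),
    scheduler=DDIMScheduler.from_pretrained(base, subfolder="scheduler", clip_sample=False, beta_schedule="linear"),
)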
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → AnimateDiffPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. 
ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). 
callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput or tuple + +If return_dict is True, pipelines.animatediff.pipeline_output.AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — +List of video outputs. It can be a nested list of length batch_size, with each sub-list containing denoised PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape +(batch_size, num_frames, channels, height, width). Output class for AnimateDiff pipelines. diff --git a/scrapped_outputs/70d01488b10f77d2988998a4346f789b.txt b/scrapped_outputs/70d01488b10f77d2988998a4346f789b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/70d01488b10f77d2988998a4346f789b.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs.
Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. 
To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. 
For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, 
num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/70e060621273639edbd174b784b16314.txt b/scrapped_outputs/70e060621273639edbd174b784b16314.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/70e060621273639edbd174b784b16314.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. 
Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. 
+ Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
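As an illustration of how several of the loading arguments above combine (a hedged sketch; the checkpoint id and the fp16 variant are only examples), one might write:
Copied
import torch
from diffusers import DiffusionPipeline

# Load half-precision weights from the fp16 variant, stored as safetensors,
# while keeping peak CPU memory close to one model size.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
    low_cpu_mem_usage=True,
)
Any remaining keyword arguments override the corresponding pipeline components; for example, passing scheduler=my_scheduler (a hypothetical, already-instantiated scheduler) replaces the default scheduler at load time.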
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/70e93cbfeb696f5466ab8c4e56f4ba75.txt b/scrapped_outputs/70e93cbfeb696f5466ab8c4e56f4ba75.txt new file mode 100644 index 0000000000000000000000000000000000000000..53d6bac007007dc5928724a55ca5a0bcb2652378 --- /dev/null +++ b/scrapped_outputs/70e93cbfeb696f5466ab8c4e56f4ba75.txt @@ -0,0 +1,90 @@ +Textual Inversion + +Textual Inversion is a technique for capturing novel concepts from a small number of example images in a way that can later be used to control text-to-image pipelines. It does so by learning new ‘words’ in the embedding space of the pipeline’s text encoder. These special words can then be used within text prompts to achieve very fine-grained control of the resulting images. + +By using just 3-5 images you can teach new concepts to a model such as Stable Diffusion for personalized image generation (image source). +This technique was introduced in An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion. The paper demonstrated the concept using a latent diffusion model but the idea has since been applied to other variants such as Stable Diffusion. + +How It Works + + +Architecture Overview from the textual inversion blog post +Before a text prompt can be used in a diffusion model, it must first be processed into a numerical representation. This typically involves tokenizing the text, converting each token to an embedding and then feeding those embeddings through a model (typically a transformer) whose output will be used as the conditioning for the diffusion model. +Textual inversion learns a new token embedding (v* in the diagram above). A prompt (that includes a token which will be mapped to this new embedding) is used in conjunction with a noised version of one or more training images as inputs to the generator model, which attempts to predict the denoised version of the image. The embedding is optimized based on how well the model does at this task - an embedding that better captures the object or style shown by the training images will give more useful information to the diffusion model and thus result in a lower denoising loss. After many steps (typically several thousand) with a variety of prompt and image variants the learned embedding should hopefully capture the essence of the new concept being taught. + +Usage + +To train your own textual inversions, see the example script here. +There is also a notebook for training: + +And one for inference: + +In addition to using concepts you have trained yourself, there is a community-created collection of trained textual inversions in the new Stable Diffusion public concepts library which you can also use from the inference notebook above. Over time this will hopefully grow into a useful resource as more examples are added. + +Example: Running locally + +The textual_inversion.py script here shows how to implement the training procedure and adapt it for stable diffusion. + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies. + + + Copied +pip install diffusers[training] accelerate transformers +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config + +Cat toy example + +You need to accept the model license before downloading or using the weights. In this example we’ll use model version v1-4, so you’ll need to visit its card, read the license and tick the checkbox if you agree. 
+You have to be a registered user in 🤗 Hugging Face Hub, and you’ll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation. +Run the following command to authenticate your token + + + Copied +huggingface-cli login +If you have already cloned the repo, then you won’t need to go through these steps. + +Now let’s get our dataset. Download 3-4 images from here and save them in a directory. This will be our training data. +And launch the training using + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="path-to-dir-containing-images" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="<cat-toy>" --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" +A full training run takes ~1 hour on one V100 GPU. + +Inference + +Once you have trained a model using the above command, inference can be done simply using the StableDiffusionPipeline. Make sure to include the placeholder_token in your prompt. + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +model_id = "path-to-your-trained-model" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A <cat-toy> backpack" + +image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] + +image.save("cat-backpack.png") diff --git a/scrapped_outputs/70fe84dd876b5225b4e002e75752cc53.txt b/scrapped_outputs/70fe84dd876b5225b4e002e75752cc53.txt new file mode 100644 index 0000000000000000000000000000000000000000..bd08be310ae298544c596d050027c90654a96e42 --- /dev/null +++ b/scrapped_outputs/70fe84dd876b5225b4e002e75752cc53.txt @@ -0,0 +1,308 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text-guided image variation. When combined with an unCLIP prior, it can also be used for full text-to-image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
Tips Stable unCLIP takes noise_level as input during inference, which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). Text-to-Image Generation + +Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open source DALL-E 2 replication [Karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha): + + Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text-guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
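The noise_level argument mentioned in the tips above is the main knob for trading faithfulness to the input against output diversity. A minimal sketch that sweeps it with the image-variation pipeline (the checkpoint and image URL are reused from the example above) could look like:
Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# noise_level=0 (the default) stays close to the input image; higher values add more
# noise to the CLIP image embedding, so the outputs drift further from the input.
for noise_level in (0, 250, 500):
    image = pipe(init_image, noise_level=noise_level).images[0]
    image.save(f"variation_noise_{noise_level}.png")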
StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
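The VAE slicing toggles documented above have no example of their own; as a rough sketch (the checkpoint name is reused from the __call__ example above), slicing lets a small batch of prompts be decoded without exhausting memory:
Copied
import torch
from diffusers import StableUnCLIPPipeline

pipe = StableUnCLIPPipeline.from_pretrained(
    "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16
).to("cuda")

# Decode the latents one slice at a time so a batch of prompts fits in memory.
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images

# Revert to single-pass decoding afterwards if preferred.
pipe.disable_vae_slicing()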
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. 
image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
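To make the ImagePipelineOutput versus tuple distinction concrete, here is a small hedged sketch (reusing the image-variation checkpoint and example image from earlier on this page):
Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# Default: an ImagePipelineOutput whose `images` field holds the generated PIL images.
output = pipe(init_image)
output.images[0].save("variation.png")

# With return_dict=False a plain tuple is returned; its first element is the image list.
images = pipe(init_image, return_dict=False)[0]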
diff --git a/scrapped_outputs/71026a871341027987af06cc73c1f71d.txt b/scrapped_outputs/71026a871341027987af06cc73c1f71d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/712f45dd68bac8b44a778ad1aa875850.txt b/scrapped_outputs/712f45dd68bac8b44a778ad1aa875850.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/713f30d08aecb7e696c4e9176d662acf.txt b/scrapped_outputs/713f30d08aecb7e696c4e9176d662acf.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/713f30d08aecb7e696c4e9176d662acf.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. 
For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/714281f8f00d4fa84bbb4e4a5479aada.txt b/scrapped_outputs/714281f8f00d4fa84bbb4e4a5479aada.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/714682b6125c702b541a9c0e455d6872.txt b/scrapped_outputs/714682b6125c702b541a9c0e455d6872.txt new file mode 100644 index 0000000000000000000000000000000000000000..010170acb5aca8bf2ac921f78fb672a7a4826b1d --- /dev/null +++ b/scrapped_outputs/714682b6125c702b541a9c0e455d6872.txt @@ -0,0 +1,381 @@ +Multistep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverMultistepScheduler + + +class diffusers.DPMSolverMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True +**kwargs + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. + + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. 
The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. 
+ + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +multistep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DPM-Solver. + +multistep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DPM-Solver. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DPM-Solver.
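As a usage illustration (a minimal sketch, not part of the original page: the checkpoint id and the 20-step setting are assumptions chosen for demonstration), the scheduler is typically swapped into an existing pipeline via from_config, with dpmsolver++ and solver_order=2 for guided sampling as recommended above:
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# load a Stable Diffusion checkpoint (the id below is assumed for illustration)
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# replace the default scheduler with the multistep DPM-Solver; dpmsolver++ with
# solver_order=2 is the recommended setting for guided sampling
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2
)
pipe = pipe.to("cuda")

# DPM-Solver usually produces good samples in roughly 20 steps
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]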
diff --git a/scrapped_outputs/71485161142008cb384f16bc9665d0c0.txt b/scrapped_outputs/71485161142008cb384f16bc9665d0c0.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/71485161142008cb384f16bc9665d0c0.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. 
The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. 
SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. 
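As a small, hedged sketch of the first inference-only technique listed above, InstructPix2Pix can be run through its dedicated pipeline; the checkpoint id, edit instruction, and input image URL below are illustrative assumptions rather than values taken from this overview:
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# load InstructPix2Pix weights (checkpoint id assumed for illustration)
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")

# the prompt is an edit instruction; image_guidance_scale controls how closely the
# output follows the input image versus the instruction
edited = pipe(
    "turn the sky into a sunset",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]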
diff --git a/scrapped_outputs/715ac84b4aa8da856b1385cc19914a81.txt b/scrapped_outputs/715ac84b4aa8da856b1385cc19914a81.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/715ac84b4aa8da856b1385cc19914a81.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. 
sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/71db5e4cece85fea204288c1256799a9.txt b/scrapped_outputs/71db5e4cece85fea204288c1256799a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/71db5e4cece85fea204288c1256799a9.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. 
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/720160f7f3129b80dd863a1c2fac67d8.txt b/scrapped_outputs/720160f7f3129b80dd863a1c2fac67d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7220d794f948b6c65b1dc853590e6d91.txt b/scrapped_outputs/7220d794f948b6c65b1dc853590e6d91.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/7220d794f948b6c65b1dc853590e6d91.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. 
diff --git a/scrapped_outputs/72938a374582ba92d72a21a33a5faee9.txt b/scrapped_outputs/72938a374582ba92d72a21a33a5faee9.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c0ad31ee6de5413052deffb62095ba2092bc251 --- /dev/null +++ b/scrapped_outputs/72938a374582ba92d72a21a33a5faee9.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture to DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
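As a quick, hedged sketch of the aspect-ratio point above (the 768×1344 size is an illustrative assumption; a full example and a low-VRAM walkthrough follow below), non-square images can be requested directly through height and width, with use_resolution_binning (the default) snapping the request to the nearest recommended size bracket:
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

# request a landscape image; with use_resolution_binning=True the requested size is
# first mapped to the closest supported bracket and the result is resized back
image = pipe(
    "A small cactus with a happy face in the Sahara desert.",
    height=768,
    width=1344,
    use_resolution_binning=True,
).images[0]
image.save("cactus.png")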
Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", + +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up som GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass it off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory-usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True max_sequence_length: int = 120 **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". 
If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. max_sequence_length (int defaults to 120) — Maximum sequence length to use with the prompt. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False max_sequence_length: int = 120 **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it’s should be the embeddings of the "" +string. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. max_sequence_length (int, defaults to 120) — Maximum sequence length to use for the prompt. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/72c7a5a8af277fe52675fa928414dab3.txt b/scrapped_outputs/72c7a5a8af277fe52675fa928414dab3.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/72c7a5a8af277fe52675fa928414dab3.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. 
Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 
📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, combining high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline.
Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
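For the upscaler route, a rough sketch with the stabilityai/stable-diffusion-x4-upscaler checkpoint could look like the one below (low_res_image is only a downsized stand-in for whatever your inpainting pipeline returned); the image-to-image route is walked through step by step next.
Copied
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# stand-in for the output of an inpainting pipeline; in practice, pass the PIL image it returned
low_res_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"
).resize((256, 256))

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
upscaler.enable_model_cpu_offload()

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
# the x4 upscaler returns an image at 4x the input resolution (here 1024x1024)
upscaled = upscaler(prompt=prompt, image=low_res_image).images[0]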
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
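That randomness comes from the Gaussian noise the denoising process starts from, so a first, simple lever is to pin it down with a seeded torch.Generator, as several examples in this guide already do. A small sketch that reuses the castle example and renders one image per seed lets you pick the variation you like:
Copied
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipeline = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
)
pipeline.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")
prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"

# the same seed always reproduces the same image; different seeds give different variations
for seed in [0, 1, 2]:
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0]
    image.save(f"castle_seed_{seed}.png")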
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/72d7984afc63936a736725273dda5eec.txt b/scrapped_outputs/72d7984afc63936a736725273dda5eec.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/72e2dbfcb4c644a22b6e5b7ff0a38240.txt b/scrapped_outputs/72e2dbfcb4c644a22b6e5b7ff0a38240.txt new file mode 100644 index 0000000000000000000000000000000000000000..d1bc74bee0e4d058e2996318eef5ca32b7a50282 --- /dev/null +++ b/scrapped_outputs/72e2dbfcb4c644a22b6e5b7ff0a38240.txt @@ -0,0 +1,256 @@ +Schedulers + +Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. +Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: +How many denoising steps? +Stochastic or deterministic? +What algorithm to use to find the denoised sample +They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. +The following paragraphs show how to do so with the 🧨 Diffusers library. + +Load pipeline + +Let’s start by loading the stable diffusion pipeline. +Remember that you have to be a registered user on the 🤗 Hugging Face Hub, and have “click-accepted” the license in order to use stable diffusion. + + + Copied +from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +# first we need to login with our access token +login() + +# Now we can download the pipeline +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +Next, we move it to GPU: + + + Copied +pipeline.to("cuda") + +Access the scheduler + +The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. + + + Copied +pipeline.scheduler +Output: + + + Copied +PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.8.0.dev0", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "trained_betas": null +} +We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. 
+First we define a prompt on which we will test all the different schedulers: + + + Copied +prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." +Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + + +Changing the scheduler + +Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property SchedulerMixin.compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. + + + Copied +pipeline.scheduler.compatibles +Output: + + + Copied +[diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] +Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: +LMSDiscreteScheduler, +DDIMScheduler, +DPMSolverMultistepScheduler, +EulerDiscreteScheduler, +PNDMScheduler, +DDPMScheduler, +EulerAncestralDiscreteScheduler. +We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient ConfigMixin.config property in combination with the ConfigMixin.from_config() function. + + + Copied +pipeline.scheduler.config +returns a dictionary of the configuration of the scheduler: +Output: + + + Copied +FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('steps_offset', 1), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.8.0.dev0'), + ('clip_sample', False)]) +This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. + + + Copied +from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +Cool, now we can run the pipeline again to compare the generation quality. + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + +If you are a JAX/Flax user, please check this section instead. + +Compare schedulers + +So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps, let’s compare them here: +LMSDiscreteScheduler usually leads to better results: + + + Copied +from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + +EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. 
+ + + Copied +from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +and: + + + Copied +from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +At the time of writing this doc DPMSolverMultistepScheduler gives arguably the best speed/quality trade-off and can be run with as little +as 20 steps. + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image + + + +As you can see most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. + +Changing the Scheduler in Flax + +If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DDPM-Solver++ scheduler: + + + Copied +import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: +FlaxLMSDiscreteScheduler +FlaxDDPMScheduler diff --git a/scrapped_outputs/733c3264c0525ee82edf7a9f632e03b6.txt b/scrapped_outputs/733c3264c0525ee82edf7a9f632e03b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7353588710ed3201e91bdc31fd2dc3c7.txt b/scrapped_outputs/7353588710ed3201e91bdc31fd2dc3c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/736a8cfc79f1ef1024e52f6f35a77326.txt b/scrapped_outputs/736a8cfc79f1ef1024e52f6f35a77326.txt new file mode 100644 index 
0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/736a8cfc79f1ef1024e52f6f35a77326.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/736f607659262c2cad9f3c4e36c54374.txt b/scrapped_outputs/736f607659262c2cad9f3c4e36c54374.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/736f607659262c2cad9f3c4e36c54374.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is a modular composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates 64x64 px image based on text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will auto accept for the other IF models. Make sure to login locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next we install diffusers and dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more in-detail examples of how to use IF. 
Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default diffusers makes use of model cpu offloading to run the whole IF pipeline with as little as 14 GB of VRAM. Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, rows=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here. 
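As a rough sketch of what that reuse looks like (assuming the stage_1 and stage_2 pipelines from the text-to-image example above are still in memory), the already-instantiated components can be handed straight to the image-to-image classes; the Converting between different pipelines section further down lists all the combinations.
Copied
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# reuse the weights that are already loaded instead of loading them a second time
# (`stage_1` and `stage_2` are the pipelines created in the text-to-image example above)
stage_1_img2img = IFImg2ImgPipeline(**stage_1.components)
stage_2_img2img = IFImg2ImgSuperResolutionPipeline(**stage_2.components)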
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, rows=4) Text Guided Inpainting Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here. 
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, rows=5) Converting between different pipelines In addition to being loaded with from_pretrained, Pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for a shorter number of timesteps. 
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, rows=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
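A note on the strength parameter documented above: roughly speaking, it truncates the denoising schedule so that only part of the schedule runs on a partially noised copy of the input image. The snippet below is an illustrative sketch of that approximate relationship, not the library's exact internals; the variable names are ours.

# Rough sketch: how `strength` scales the number of denoising steps that
# actually run in the img2img pipelines (illustrative arithmetic only).
num_inference_steps = 80  # IFImg2ImgPipeline default
strength = 0.7            # IFImg2ImgPipeline default
effective_steps = min(int(num_inference_steps * strength), num_inference_steps)
print(effective_steps)    # 56 -> denoising starts from a partially noised `image`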
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
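Since all of the IF pipelines above expose the same encode_prompt() method, a common pattern is to encode a prompt (and a negative prompt) once and reuse the embeddings across several calls. A minimal sketch, assuming the stage-I IFPipeline instance (pipe) from the first example and a hypothetical prompt:

# Encode once, then reuse the embeddings for multiple generations.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "a photo of a red panda wearing a tiny wizard hat",  # hypothetical prompt
    negative_prompt="low quality, blurry, watermark",
)
images = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    num_images_per_prompt=2,
).images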
diff --git a/scrapped_outputs/737e3f95bdb771175263078be74935cc.txt b/scrapped_outputs/737e3f95bdb771175263078be74935cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..87e841bb6526798d7ec432c9b142b89c41f96f02 --- /dev/null +++ b/scrapped_outputs/737e3f95bdb771175263078be74935cc.txt @@ -0,0 +1,25 @@ +How to use OpenVINO for inference + +🤗 Optimum provides a Stable Diffusion pipeline compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the full list of supported devices). + +Installation + +Install 🤗 Optimum Intel with the following command: + + + Copied +pip install optimum["openvino"] + +Stable Diffusion Inference + +To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace StableDiffusionPipeline with OVStableDiffusionPipeline. In case you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, you can set export=True. + + + Copied +from optimum.intel.openvino import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "a photo of an astronaut riding a horse on mars" +images = pipe(prompt).images[0] +You can find more examples (such as static reshaping and model compilation) in optimum documentation. diff --git a/scrapped_outputs/73878dd0eb5497b6f421c72ec38e6a19.txt b/scrapped_outputs/73878dd0eb5497b6f421c72ec38e6a19.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/73878dd0eb5497b6f421c72ec38e6a19.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. 
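A short sketch of toggling the mechanism, using the pipeline object created above; this is useful for an A/B comparison against the plain UNet:

# Toggle FreeU off to compare against the baseline UNet, then re-enable it.
pipeline.disable_freeu()
# ... run a baseline generation here for comparison ...
pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)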
And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/739bbe5f820b9860f394f5fa6118d6e0.txt b/scrapped_outputs/739bbe5f820b9860f394f5fa6118d6e0.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/739bbe5f820b9860f394f5fa6118d6e0.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Hide Pytorch content Note - PyTorch only supports Python 3.8 - 3.11 on Windows. 
Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. +The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. 
+You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/73b4bc13585515a4604c292502472185.txt b/scrapped_outputs/73b4bc13585515a4604c292502472185.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/73b4bc13585515a4604c292502472185.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
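Although no usage example is included here, a rough sketch of how this pipeline is typically driven, adapted from the diffusers reinforcement-learning example, is shown below. The environment setup, the checkpoint name and the call arguments are assumptions rather than part of this document, and the snippet requires gym and d4rl to be installed.

# Hedged sketch: value-guided planning in a D4RL Hopper environment.
# The checkpoint name and keyword arguments below are assumptions.
import d4rl  # noqa: F401 -- registers the offline-RL Hopper environments
import gym

from diffusers.experimental import ValueGuidedRLPipeline

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32", env=env
)

obs = env.reset()
for _ in range(10):
    # Denoise a batch of candidate trajectories, rank them with the value
    # function, and return the (denormalized) first action of the best plan.
    action = pipeline(obs, planning_horizon=32, n_guide_steps=2, scale=0.1)
    obs, reward, done, _ = env.step(action)
    if done:
        break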
diff --git a/scrapped_outputs/73b51ab2e2755914d6f81249b22516c9.txt b/scrapped_outputs/73b51ab2e2755914d6f81249b22516c9.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/73b51ab2e2755914d6f81249b22516c9.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/73c9dcb3cd7039ac4daef5730c5ebb8f.txt b/scrapped_outputs/73c9dcb3cd7039ac4daef5730c5ebb8f.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/73c9dcb3cd7039ac4daef5730c5ebb8f.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. 
A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. 
Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. 
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/73dc5a8068e014af56511225c0e0870a.txt b/scrapped_outputs/73dc5a8068e014af56511225c0e0870a.txt new file mode 100644 index 0000000000000000000000000000000000000000..3de545917a945be758b5da9cc73ec3840eca6cd1 --- /dev/null +++ b/scrapped_outputs/73dc5a8068e014af56511225c0e0870a.txt @@ -0,0 +1,108 @@ +Installation + +Install 🤗 Diffusers for whichever deep learning library you’re working with. +🤗 Diffusers is tested on Python 3.7+, PyTorch 1.7.0+ and flax. Follow the installation instructions below for the deep learning library you are using: +PyTorch installation instructions. +Flax installation instructions. + +Install with pip + +You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects, and avoid compatibility issues between dependencies. +Start by creating a virtual environment in your project directory: + + + Copied +python -m venv .env +Activate the virtual environment: + + + Copied +source .env/bin/activate +Now you’re ready to install 🤗 Diffusers with the following command: +For PyTorch + + + Copied +pip install diffusers["torch"] +For Flax + + + Copied +pip install diffusers["flax"] + +Install from source + +Before installing diffusers from source, make sure you have torch and accelerate installed. +For torch installation refer to the torch docs. +To install accelerate: + + + Copied +pip install accelerate +Install 🤗 Diffusers from source with the following command: + + + Copied +pip install git+https://github.com/huggingface/diffusers +This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments.
+For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue, so we can fix it even sooner! + +Editable install + +You will need an editable install if you’d like to: +Use the main version of the source code. +Contribute to 🤗 Diffusers and need to test changes in the code. +Clone the repository and install 🤗 Diffusers with the following commands: + + + Copied +git clone https://github.com/huggingface/diffusers.git +cd diffusers +For PyTorch + + + Copied +pip install -e ".[torch]" +For Flax + + + Copied +pip install -e ".[flax]" +These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.7/site-packages/, Python will also search the folder you cloned to: ~/diffusers/. +You must keep the diffusers folder if you want to keep using the library. +Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: + + + Copied +cd ~/diffusers/ +git pull +Your Python environment will find the main version of 🤗 Diffusers on the next run. + +Notice on telemetry logging + +Our library gathers telemetry information during from_pretrained() requests. +This data includes the version of Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the HuggingFace Hub, +and is not collected during local usage. +We understand that not everyone wants to share additional information, and we respect your privacy, +so you can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: +On Linux/MacOS: + + + Copied +export DISABLE_TELEMETRY=YES +On Windows: + + + Copied +set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/73ddb390007b268ad3834fc3498557b7.txt b/scrapped_outputs/73ddb390007b268ad3834fc3498557b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/73edcb5d5a9f73c792c0d876f6e5a37a.txt b/scrapped_outputs/73edcb5d5a9f73c792c0d876f6e5a37a.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/73edcb5d5a9f73c792c0d876f6e5a37a.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). 
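Before walking through the script, it can help to see the shape of the objective in isolation. The snippet below is a minimal sketch of the skipping-step consistency loss, written against hypothetical callables (student, ema_student, teacher_ddim_solver) that stand in for the real models; it is not the training script’s code, just the computation the script implements.
Copied
import torch

def consistency_distillation_loss(student, ema_student, teacher_ddim_solver,
                                  x_t, t, t_skip, guidance_scale, huber_c=0.001):
    # The online student predicts the clean latent directly from the noisy latent at timestep t.
    pred_online = student(x_t, t, guidance_scale)

    with torch.no_grad():
        # The frozen teacher jumps several DDIM steps at once (the skipping-step trick)...
        x_t_skip = teacher_ddim_solver(x_t, t, t_skip, guidance_scale)
        # ...and the EMA copy of the student is evaluated at that earlier timestep.
        pred_target = ema_student(x_t_skip, t_skip, guidance_scale)

    # Pseudo-Huber consistency loss, matching the script's --loss_type="huber" option.
    return torch.mean(torch.sqrt((pred_online - pred_target) ** 2 + huber_c**2) - huber_c)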
If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. 
--pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. 
Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl_wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide (a short inference sketch for a distilled LoRA follows below). Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide.
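If you trained one of the LoRA variants instead of a full UNet, the distilled LoRA can be loaded on top of the base model for few-step inference. The sketch below is hedged: your-username/your-lcm-lora is a hypothetical repository id standing in for the output of the LoRA script.
Copied
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
# Swap in the LCM scheduler so a handful of denoising steps is enough.
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

# Hypothetical repo id for the weights produced by train_lcm_distill_lora_sd_wds.py.
pipeline.load_lora_weights("your-username/your-lcm-lora")

prompt = "sushi rolls in the form of panda heads, sushi platter"
image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0]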
Next steps Congratulations on distilling a LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRA’s for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/73fb21c9ff211087d48372bd06d095b7.txt b/scrapped_outputs/73fb21c9ff211087d48372bd06d095b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/73fb21c9ff211087d48372bd06d095b7.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin,Anastasia Maltseva,Igor Pavlov,Andrei Filatov,Arseniy Shakhmatov,Andrey Kuznetsov,Denis Dimitrov, Zein Shaheen The description from it’s Github page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder decoder model based on the T5 architecture. New U-Net architecture featuring BigGAN-deep blocks doubles depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." 
+ +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. 
A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." 
+>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. diff --git a/scrapped_outputs/74097a1aa08f33f2bcc3b054d768467c.txt b/scrapped_outputs/74097a1aa08f33f2bcc3b054d768467c.txt new file mode 100644 index 0000000000000000000000000000000000000000..48649ec5c0477bba9de1fe1afcb189a2b6b4fbd9 --- /dev/null +++ b/scrapped_outputs/74097a1aa08f33f2bcc3b054d768467c.txt @@ -0,0 +1,88 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. 
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. unload_textual_inversion < source > ( tokens: Union = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") diff --git a/scrapped_outputs/7421bce1d6e7c004d6eddb3e5aa66195.txt b/scrapped_outputs/7421bce1d6e7c004d6eddb3e5aa66195.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/7421bce1d6e7c004d6eddb3e5aa66195.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. 
Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/742328a9786ef814f96538fd62e9ed0c.txt b/scrapped_outputs/742328a9786ef814f96538fd62e9ed0c.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/742328a9786ef814f96538fd62e9ed0c.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. 
do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked ares in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked ares in an image, and expands region to match the aspect ratio of the original image; +for example, if user drew mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image(PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. if it is a numpy array, should have +shape [batch, height, width] or [batch, height, width, channel] if it is a pytorch tensor, should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the height of image input. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use the width of the image` input. This function return the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. 
Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of supported formats. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the get_default_height_width() to get default height. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. 
do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/743e754a9b3611147ce3e65d6fd17f8a.txt b/scrapped_outputs/743e754a9b3611147ce3e65d6fd17f8a.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/743e754a9b3611147ce3e65d6fd17f8a.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. 
Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/744dcbca2080763f24c7849534d6cee9.txt b/scrapped_outputs/744dcbca2080763f24c7849534d6cee9.txt new file mode 100644 index 0000000000000000000000000000000000000000..019c4d1bed8279c368db9a675af18172eacecbe1 --- /dev/null +++ b/scrapped_outputs/744dcbca2080763f24c7849534d6cee9.txt @@ -0,0 +1,24 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: str weight_name: str **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
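To make the loading flow above concrete, here is a minimal sketch of attaching an IP-Adapter to a Stable Diffusion pipeline with load_ip_adapter(). The h94/IP-Adapter repository, subfolder, and weight name are the commonly published checkpoint locations and are assumed here for illustration; the reference image path is a placeholder. Copied
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the IP-Adapter image-projection weights from the Hub and attach them to the UNet
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipeline.set_ip_adapter_scale(0.6)  # 0 = ignore the image prompt, 1 = follow it closely

reference = load_image("ip_adapter_reference.png")  # placeholder: any reference image
image = pipeline(
    prompt="best quality, high quality",
    ip_adapter_image=reference,
    num_inference_steps=50,
).images[0]
image.save("ip_adapter_result.png")
The scale set with set_ip_adapter_scale() trades off how strongly the reference image steers generation relative to the text prompt.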
diff --git a/scrapped_outputs/7455fcab558e65af195fb5f67d93845b.txt b/scrapped_outputs/7455fcab558e65af195fb5f67d93845b.txt new file mode 100644 index 0000000000000000000000000000000000000000..a393913848d6f7c336242559c3c841e1e1ac8bf4 --- /dev/null +++ b/scrapped_outputs/7455fcab558e65af195fb5f67d93845b.txt @@ -0,0 +1,30 @@ +Conditional Image Generation + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference +Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for text-to-image generation with Latent Diffusion: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. +You can move the generator object to GPU, just like you would in PyTorch. + + + Copied +>>> generator.to("cuda") +Now you can use the generator on your text prompt: + + + Copied +>>> image = generator("An image of a squirrel in Picasso style").images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("image_of_squirrel_painting.png") diff --git a/scrapped_outputs/74c3f1718585f989fbdafdaf8ff94cd1.txt b/scrapped_outputs/74c3f1718585f989fbdafdaf8ff94cd1.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2cc2de2c2b439a4068ad959cd182522bf83b8b7 --- /dev/null +++ b/scrapped_outputs/74c3f1718585f989fbdafdaf8ff94cd1.txt @@ -0,0 +1,72 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable DIffusion with samplers from k-diffusion. Note that most the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. 
Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
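To show how these k-diffusion pipelines are driven end to end, here is a minimal usage sketch; it assumes the k-diffusion package is installed (pip install k-diffusion), and the checkpoint and the sample_heun sampler name are illustrative choices. Copied
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Select one of the k-diffusion samplers by its function name
pipe.set_scheduler("sample_heun")

image = pipe("astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("k_diffusion_heun.png")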
diff --git a/scrapped_outputs/74cd3f07f86a0312fc57b6a1ff8f887b.txt b/scrapped_outputs/74cd3f07f86a0312fc57b6a1ff8f887b.txt new file mode 100644 index 0000000000000000000000000000000000000000..54b679f844e0756b73267dc59e36b49e7f006adb --- /dev/null +++ b/scrapped_outputs/74cd3f07f86a0312fc57b6a1ff8f887b.txt @@ -0,0 +1,95 @@ +PNDM + + +Overview + +Pseudo Numerical methods for Diffusion Models on manifolds (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao. +The abstract of the paper is the following: +Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_pndm.py +Unconditional Image Generation +- + +PNDMPipeline + + +class diffusers.PNDMPipeline + +< +source +> +( +unet: UNet2DModel +scheduler: PNDMScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +The PNDMScheduler to be used in combination with unet to denoise the encoded image. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — The number of images to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +generator (torch.Generator, optional) — A torch +generator to make generation +deterministic. + + +output_type (str, optional, defaults to "pil") — The output format of the generate image. 
Choose +between PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — Whether or not to return a +ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/74d3283f744ab49271dbab7af3cc4967.txt b/scrapped_outputs/74d3283f744ab49271dbab7af3cc4967.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/74d3283f744ab49271dbab7af3cc4967.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. 
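In addition to the GIF examples above, the generated latents can also be decoded into a textured mesh by requesting output_type="mesh". The sketch below reuses the openai/shap-e checkpoint and export_to_ply from diffusers.utils; the prompt and file names are arbitrary. Copied
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_ply

pipe = DiffusionPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Request a mesh instead of rendered frames
images = pipe(
    "a birthday cupcake",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",
).images

# Save the decoded mesh as a .ply file for use in 3D tools
ply_path = export_to_ply(images[0], "cupcake_3d.ply")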
diff --git a/scrapped_outputs/7502d3d308337f19290b6785c9025d93.txt b/scrapped_outputs/7502d3d308337f19290b6785c9025d93.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/750d41f69f84729f23c1d87a20a207ee.txt b/scrapped_outputs/750d41f69f84729f23c1d87a20a207ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/750d41f69f84729f23c1d87a20a207ee.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. 
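As a small sketch of the save_config()/load_config() round trip, reusing the google/ddpm-cifar10-32 scheduler from the example above (the local directory name is arbitrary): Copied
from diffusers import DDPMScheduler

# Download a scheduler and inspect its frozen configuration
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
print(scheduler.config.num_train_timesteps)

# save_config() writes scheduler_config.json into the directory,
# and load_config() reads it back as a plain dictionary
scheduler.save_config("./ddpm-scheduler-config")
config = DDPMScheduler.load_config("./ddpm-scheduler-config")
assert config["num_train_timesteps"] == scheduler.config.num_train_timesteps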
diff --git a/scrapped_outputs/7556771cbfe9331393443989e9826a05.txt b/scrapped_outputs/7556771cbfe9331393443989e9826a05.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad6e5d49a3040ac519ec419494a3930d6fff2730 --- /dev/null +++ b/scrapped_outputs/7556771cbfe9331393443989e9826a05.txt @@ -0,0 +1,740 @@ +Accelerated PyTorch 2.0 support in Diffusers + +Starting from version 0.13.0, Diffusers supports the latest optimizations from the upcoming PyTorch 2.0 release. These include: +Support for accelerated transformers implementation with memory-efficient attention – no extra dependencies required. +torch.compile support for an extra performance boost when individual models are compiled. + +Installation + + +To benefit from the accelerated transformers implementation and `torch.compile`, we will need to install the nightly version of PyTorch, as the stable version is yet to be released. The first step is to install CUDA 11.7 or CUDA 11.8, +as PyTorch 2.0 does not support the previous versions. Once CUDA is installed, torch nightly can be installed using: + + + + Copied +pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu117 + +Using accelerated transformers and torch.compile. + +Accelerated Transformers implementation +PyTorch 2.0 includes an optimized and memory-efficient attention implementation through the torch.nn.functional.scaled_dot_product_attention function, which automatically enables several optimizations depending on the inputs and the GPU type. This is similar to the memory_efficient_attention from xFormers, but built natively into PyTorch. +These optimizations will be enabled by default in Diffusers if PyTorch 2.0 is installed and if torch.nn.functional.scaled_dot_product_attention is available. To use it, just install torch 2.0 as suggested above and simply use the pipeline. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +If you want to enable it explicitly (which is not required), you can do so as shown below. + + + Copied +import torch +from diffusers import StableDiffusionPipeline +from diffusers.models.cross_attention import AttnProcessor2_0 + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(AttnProcessor2_0()) + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +This should be as fast and memory efficient as xFormers. More details are in the benchmark below. +torch.compile +To get an additional speedup, we can use the new torch.compile feature. To do so, we simply wrap our unet with torch.compile. For more information and different options, refer to the +torch compile docs. + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + "cuda" +) +pipe.unet = torch.compile(pipe.unet) + +batch_size = 10 +steps = 50 +prompt = "A photo of an astronaut riding a horse on mars." +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images +Depending on the type of GPU, compile() can yield between 2-9% additional speed-up over the accelerated transformer optimizations.
Note, however, that compilation is able to squeeze more performance improvements in more recent GPU architectures such as Ampere (A100, 3090), Ada (4090) and Hopper (H100). +Compilation takes some time to complete, so it is best suited for situations where you need to prepare your pipeline once and then perform the same type of inference operations multiple times. + +Benchmark + +We conducted a simple benchmark on different GPUs to compare vanilla attention, xFormers, torch.nn.functional.scaled_dot_product_attention and torch.compile+torch.nn.functional.scaled_dot_product_attention. +For the benchmark we used the the stable-diffusion-v1-4 model with 50 steps. The xFormers benchmark is done using the torch==1.13.1 version, while the accelerated transformers optimizations are tested using nightly versions of PyTorch 2.0. The tables below summarize the results we got. +The Speed over xformers columns denote the speed-up gained over xFormers using the torch.compile+torch.nn.functional.scaled_dot_product_attention. + +FP16 benchmark + +The table below shows the benchmark results for inference using fp16. As we can see, torch.nn.functional.scaled_dot_product_attention is as fast as xFormers (sometimes slightly faster/slower) on all the GPUs we tested. +And using torch.compile gives further speed-up of up of 10% over xFormers, but it’s mostly noticeable on the A100 GPU. +The time reported is in seconds. +GPU +Batch Size +Vanilla Attention +xFormers +PyTorch2.0 SDPA +SDPA + torch.compile +Speed over xformers (%) +A100 +10 +12.02 +8.7 +8.79 +7.89 +9.31 +A100 +16 +18.95 +13.57 +13.67 +12.25 +9.73 +A100 +32 (1) +OOM +26.56 +26.68 +24.08 +9.34 +A100 +64 + +52.51 +53.03 +47.81 +8.95 + + + + + + + +A10 +4 +13.94 +9.81 +10.01 +9.35 +4.69 +A10 +8 +27.09 +19 +19.53 +18.33 +3.53 +A10 +10 +33.69 +23.53 +24.19 +22.52 +4.29 +A10 +16 +OOM +37.55 +38.31 +36.81 +1.97 +A10 +32 (1) + +77.19 +78.43 +76.64 +0.71 +A10 +64 (1) + +173.59 +158.99 +155.14 +10.63 + + + + + + + +T4 +4 +38.81 +30.09 +29.74 +27.55 +8.44 +T4 +8 +OOM +55.71 +55.99 +53.85 +3.34 +T4 +10 +OOM +68.96 +69.86 +65.35 +5.23 +T4 +16 +OOM +111.47 +113.26 +106.93 +4.07 + + + + + + + +V100 +4 +9.84 +8.16 +8.09 +7.65 +6.25 +V100 +8 +OOM +15.62 +15.44 +14.59 +6.59 +V100 +10 +OOM +19.52 +19.28 +18.18 +6.86 +V100 +16 +OOM +30.29 +29.84 +28.22 +6.83 + + + + + + + +3090 +4 +10.04 +7.82 +7.89 +7.47 +4.48 +3090 +8 +19.27 +14.97 +15.04 +14.22 +5.01 +3090 +10 +24.08 +18.7 +18.7 +17.69 +5.40 +3090 +16 +OOM +29.06 +29.06 +28.2 +2.96 +3090 +32 (1) + +58.05 +58 +54.88 +5.46 +3090 +64 (1) + +126.54 +126.03 +117.33 +7.28 + + + + + + + +3090 Ti +4 +9.07 +7.14 +7.15 +6.81 +4.62 +3090 Ti +8 +17.51 +13.65 +13.72 +12.99 +4.84 +3090 Ti +10 (2) +21.79 +16.85 +16.93 +16.02 +4.93 +3090 Ti +16 +OOM +26.1 +26.28 +25.46 +2.45 +3090 Ti +32 (1) + +51.78 +52.04 +49.15 +5.08 +3090 Ti +64 (1) + +112.02 +112.33 +103.91 +7.24 + + + + + + + +4090 +4 +10.48 +8.37 +8.32 +8.01 +4.30 +4090 +8 +14.33 +10.22 +10.42 +9.78 +4.31 +4090 +16 + +17.07 +17.46 +17.15 +-0.47 +4090 +32 (1) + +39.03 +39.86 +37.97 +2.72 +4090 +64 (1) + +77.29 +79.44 +77.67 +-0.49 + +FP32 benchmark + +The table below shows the benchmark results for inference using fp32. In this case, torch.nn.functional.scaled_dot_product_attention is faster than xFormers on all the GPUs we tested. +Using torch.compile in addition to the accelerated transformers implementation can yield up to 19% performance improvement over xFormers in Ampere and Ada cards, and up to 20% (Ampere) or 28% (Ada) over vanilla attention. 
+GPU +Batch Size +Vanilla Attention +xFormers +PyTorch2.0 SDPA +SDPA + torch.compile +Speed over xformers (%) +Speed over vanilla (%) +A100 +4 +16.56 +12.42 +12.2 +11.84 +4.67 +28.50 +A100 +10 +OOM +29.93 +29.44 +28.5 +4.78 + +A100 +16 + +47.08 +46.27 +44.8 +4.84 + +A100 +32 + +92.89 +91.34 +88.35 +4.89 + +A100 +64 + +185.3 +182.71 +176.48 +4.76 + + + + + + + + + +A10 +1 +10.59 +8.81 +7.51 +7.35 +16.57 +30.59 +A10 +4 +34.77 +27.63 +22.77 +22.07 +20.12 +36.53 +A10 +8 + +56.19 +43.53 +43.86 +21.94 + +A10 +16 + +116.49 +88.56 +86.64 +25.62 + +A10 +32 + +221.95 +175.74 +168.18 +24.23 + +A10 +48 + +333.23 +264.84 + +20.52 + + + + + + + + + +T4 +1 +28.2 +24.49 +23.93 +23.56 +3.80 +16.45 +T4 +2 +52.77 +45.7 +45.88 +45.06 +1.40 +14.61 +T4 +4 +OOM +85.72 +85.78 +84.48 +1.45 + +T4 +8 + +149.64 +150.75 +148.4 +0.83 + + + + + + + + + +V100 +1 +7.4 +6.84 +6.8 +6.66 +2.63 +10.00 +V100 +2 +13.85 +12.81 +12.66 +12.35 +3.59 +10.83 +V100 +4 +OOM +25.73 +25.31 +24.78 +3.69 + +V100 +8 + +43.95 +43.37 +42.25 +3.87 + +V100 +16 + +84.99 +84.73 +82.55 +2.87 + + + + + + + + + +3090 +1 +7.09 +6.78 +6.11 +6.03 +11.06 +14.95 +3090 +4 +22.69 +21.45 +18.67 +18.09 +15.66 +20.27 +3090 +8 + +42.59 +36.75 +35.59 +16.44 + +3090 +16 + +85.35 +72.37 +70.25 +17.69 + +3090 +32 (1) + +162.05 +138.99 +134.53 +16.98 + +3090 +48 + +241.91 +207.75 + +14.12 + + + + + + + + + +3090 Ti +1 +6.45 +6.19 +5.64 +5.49 +11.31 +14.88 +3090 Ti +4 +20.32 +19.31 +16.9 +16.37 +15.23 +19.44 +3090 Ti +8 (2) + +37.93 +33.05 +31.99 +15.66 + +3090 Ti +16 + +75.37 +65.25 +64.32 +14.66 + +3090 Ti +32 (1) + +142.55 +124.44 +120.74 +15.30 + +3090 Ti +48 + +213.19 +186.55 + +12.50 + + + + + + + + + +4090 +1 +5.54 +4.99 +4.51 +4.44 +11.02 +19.86 +4090 +4 +13.67 +11.4 +10.3 +9.84 +13.68 +28.02 +4090 +8 + +19.79 +17.13 +16.19 +18.19 + +4090 +16 + +38.62 +33.14 +32.31 +16.34 + +4090 +32 (1) + +76.57 +65.96 +62.05 +18.96 + +4090 +48 + +114.44 +98.78 + +13.68 + +(1) Batch Size >= 32 requires enable_vae_slicing() because of https://github.com/pytorch/pytorch/issues/81665 +This is required for PyTorch 1.13.1, and also for PyTorch 2.0 and batch size of 64 +For more details about how this benchmark was run, please refer to this PR. diff --git a/scrapped_outputs/75719a41e8830385a68a522e04579915.txt b/scrapped_outputs/75719a41e8830385a68a522e04579915.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/75719a41e8830385a68a522e04579915.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 
🧨 diff --git a/scrapped_outputs/75a19bfa1238f1c253d6f799432062d2.txt b/scrapped_outputs/75a19bfa1238f1c253d6f799432062d2.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/75b0ea5d1855ff08c76c73faa5a193cc.txt b/scrapped_outputs/75b0ea5d1855ff08c76c73faa5a193cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/75b0ea5d1855ff08c76c73faa5a193cc.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. 
You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/75ded1fd1dd92d385ed0cfeb9062907a.txt b/scrapped_outputs/75ded1fd1dd92d385ed0cfeb9062907a.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/75ded1fd1dd92d385ed0cfeb9062907a.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. 
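Since image-to-image is listed as supported, here is a hedged sketch using the corresponding Optimum Intel class, OVStableDiffusionImg2ImgPipeline; the input image path is a placeholder and the exact call signature may vary slightly across optimum-intel versions. Copied
from optimum.intel import OVStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionImg2ImgPipeline.from_pretrained(model_id, export=True)

init_image = load_image("ship_sketch.png")  # placeholder: your own starting image
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
image.save("openvino_img2img.png")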
Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/75fe28b0811646b05fbd607c81e4ac6f.txt b/scrapped_outputs/75fe28b0811646b05fbd607c81e4ac6f.txt new file mode 100644 index 0000000000000000000000000000000000000000..4311b047a1b5aa3d3e6273ec05b2f484f4d2f760 --- /dev/null +++ b/scrapped_outputs/75fe28b0811646b05fbd607c81e4ac6f.txt @@ -0,0 +1,364 @@ +Loaders Adapters (textual inversion, LoRA, hypernetworks) allow you to modify a diffusion model to generate images in a specific style without training or finetuning the entire model. The adapter weights are very portable because they’re typically only a tiny fraction of the pretrained model weights. 🤗 Diffusers provides an easy-to-use LoaderMixin API to load adapter weights. 🧪 The LoaderMixins are highly experimental and prone to future changes. To use private or gated models, log-in with huggingface-cli login. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) delete_adapters < source > ( adapter_names: typing.Union[typing.List[str], str] ) Parameters Deletes the LoRA layers of adapter_name for the unet. — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora < source > ( ) Disables the active LoRA layers for the unet. enable_lora < source > ( ) Enables the active LoRA layers for the unet. load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. save_attn_procs < source > ( save_directory: typing.Union[str, os.PathLike] is_main_process: bool = True weight_name: str = None save_function: typing.Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save an attention processor to a directory so that it can be reloaded using the +load_attn_procs() method. set_adapters < source > ( adapter_names: typing.Union[typing.List[str], str] weights: typing.Union[typing.List[float], float, NoneType] = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Sets the adapter layers for the unet. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load textual inversion tokens and embeddings to the tokenizer and text encoder. 
load_textual_inversion < source > ( pretrained_model_name_or_path: typing.Union[str, typing.List[str], typing.Dict[str, torch.Tensor], typing.List[typing.Dict[str, torch.Tensor]]] token: typing.Union[str, typing.List[str], NoneType] = None tokenizer: typing.Optional[ForwardRef('PreTrainedTokenizer')] = None text_encoder: typing.Optional[ForwardRef('PreTrainedModel')] = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load textual inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a textual inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a textual inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: typing.Union[str, typing.List[str]] tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] adapter_name: typing.Optional[str] = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: typing.Union[typing.List[str], str] ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. 
Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: typing.Optional[ForwardRef('PreTrainedModel')] = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: typing.Optional[ForwardRef('PreTrainedModel')] = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. 
This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: typing.Union[str, os.PathLike] unet_lora_layers: typing.Dict[str, typing.Union[torch.nn.modules.module.Module, torch.Tensor]] = None text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None is_main_process: bool = True weight_name: str = None save_function: typing.Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: typing.Union[typing.List[str], str] text_encoder: typing.Optional[ForwardRef('PreTrainedModel')] = None text_encoder_weights: typing.List[float] = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. 
text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: typing.List[str] device: typing.Union[torch.device, str, int] ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. 
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... 
torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlnetMixin < source > ( ) from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained controlnet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. 
Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z.
For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate an AutoencoderKL from pretrained VAE weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by +default. Make sure to pass both image_size and scaling_factor to from_single_file() if you want to load +a VAE that accompanies a Stable Diffusion v2 (or higher) or SDXL model. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) diff --git a/scrapped_outputs/7620f921b1db202e7f1f5edbbe341cbe.txt b/scrapped_outputs/7620f921b1db202e7f1f5edbbe341cbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/7620f921b1db202e7f1f5edbbe341cbe.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation.
sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/765cbcbec7f889ed405e0320fe4c263b.txt b/scrapped_outputs/765cbcbec7f889ed405e0320fe4c263b.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/765cbcbec7f889ed405e0320fe4c263b.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/76680f6d5de7f79198b01fed2c77a6af.txt b/scrapped_outputs/76680f6d5de7f79198b01fed2c77a6af.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/76680f6d5de7f79198b01fed2c77a6af.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 
🧨 diff --git a/scrapped_outputs/7673b32b6641910600ef0f5b98411918.txt b/scrapped_outputs/7673b32b6641910600ef0f5b98411918.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/7673b32b6641910600ef0f5b98411918.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. 
Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. 
FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. 
Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files.
The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/76a8209d6e346f7711a634706de32c46.txt b/scrapped_outputs/76a8209d6e346f7711a634706de32c46.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ab6d945875a0e619ebe5c590dd7d60c41d0ccbd --- /dev/null +++ b/scrapped_outputs/76a8209d6e346f7711a634706de32c46.txt @@ -0,0 +1,362 @@ +DEIS + +Fast Sampling of Diffusion Models with Exponential Integrator. + +Overview + +Original paper can be found here. The original implementation can be found here. + +DEISMultistepScheduler + + +class diffusers.DEISMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'deis' +solver_type: str = 'logrho' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DEIS; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided sampling, and +solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon), or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid woks when thresholding=True + + +algorithm_type (str, default deis) — +the algorithm type for the solver. 
Currently we support multistep deis; we will add other variants of DEIS in +the future. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DEIS for steps < 15, especially for steps <= 10. + + + +DEIS (https://arxiv.org/abs/2204.13902) is a fast high-order solver for diffusion ODEs. We slightly modify the +polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification +enjoys closed-form coefficients for the exponential multistep update instead of relying on a numerical solver. More +variants of DEIS can be found in https://github.com/qsh-zh/deis. +Currently, we support the log-rho multistep DEIS. We recommend using solver_order=2 or 3, while solver_order=1 +reduces to DDIM. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set thresholding=True to use dynamic thresholding. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from the learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of the sample being created by the diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the DEIS algorithm needs. + +deis_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from the learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of the sample being created by the diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DEIS (equivalent to DDIM). + +multistep_deis_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from the learned diffusion model at the current and latter timesteps. + + +timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of the sample being created by the diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DEIS.
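These per-order update functions are driven by step() inside a pipeline's denoising loop rather than being called directly. A minimal usage sketch (our own illustration, not an official example) that swaps the scheduler into an existing Stable Diffusion pipeline:
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Reuse the pipeline's existing scheduler config so the beta schedule and prediction type stay consistent.
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("deis_sample.png")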
+ +multistep_deis_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DEIS. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DEIS. diff --git a/scrapped_outputs/76ed94632406efaa4257cabcc9fc98d6.txt b/scrapped_outputs/76ed94632406efaa4257cabcc9fc98d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7703ebf51480794b4cd4b33586416109.txt b/scrapped_outputs/7703ebf51480794b4cd4b33586416109.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/7703ebf51480794b4cd4b33586416109.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. 
The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. 
prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
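A minimal usage sketch, assuming the kandinsky-community/kandinsky-2-2-controlnet-depth checkpoint and a precomputed depth hint (a random stand-in tensor is used here in place of a real depth map):
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

image_emb, zero_image_emb = pipe_prior("a robot walking in a snowy forest, 4k photo").to_tuple()
# In practice `hint` would be a depth map of shape (batch, 3, height, width) scaled to [0, 1].
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("robot_depth.png")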
Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, nagative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder, torch_dtype=torch.float16" +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns ImagePipelineOutput or tuple. Function invoked when calling the pipeline for generation. Examples: Copied
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
# Load and downscale the starting image
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0]
enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to torch.device('meta'), loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDPMScheduler) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
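Compared to the plain image-to-image pipeline, this variant additionally takes a hint tensor that conditions generation on a control signal such as a depth map. The snippet below is a hedged sketch of one way to drive it: the checkpoint names, the use of the 🤗 Transformers depth-estimation pipeline, and the (1, 3, height, width) hint layout scaled to [0, 1] are assumptions rather than guarantees. Copied
import torch
import numpy as np
from transformers import pipeline
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

# Assumed checkpoints; the depth-conditioned decoder is taken from the kandinsky-community org.
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
).resize((768, 768))

# Build a 3-channel depth map in [0, 1] as the controlnet hint (layout assumed: batch, channels, height, width).
depth_estimator = pipeline("depth-estimation")
depth = np.array(depth_estimator(img)["depth"])[:, :, None]
depth = np.concatenate([depth, depth, depth], axis=2)
hint = torch.from_numpy(depth).float().permute(2, 0, 1).unsqueeze(0).half().to("cuda") / 255.0

prompt = "A robot cat, 4k photo"
image_embeds, negative_image_embeds = pipe_prior(prompt).to_tuple()

image = pipe(
    image=img,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    hint=hint,
    strength=0.5,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("robot_cat_depth.png")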
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple. Function invoked when calling the pipeline for generation. Examples: Copied
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

# White (1) pixels in the mask are repainted, black (0) pixels are preserved
mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1

image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0]
enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to torch.device('meta'), loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/771e706f6fcdd303aac3f577aa777e3b.txt b/scrapped_outputs/771e706f6fcdd303aac3f577aa777e3b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/771e706f6fcdd303aac3f577aa777e3b.txt @@ -0,0 +1,115 @@ Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. 
Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. 
Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. 
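If you want to train several concepts at once, the --concepts_list flag mentioned earlier points to a JSON file describing each concept. The snippet below is a hedged sketch of generating such a file: the key names (instance_prompt, class_prompt, instance_data_dir, class_data_dir), the <new1>/<new2> modifier tokens, and the "+"-separated --modifier_token form referenced afterwards are assumptions based on the script's conventions, not a verbatim schema. Copied
import json

# Hypothetical two-concept configuration; adjust paths and prompts to your own data.
concepts_list = [
    {
        "instance_prompt": "photo of a <new1> cat",
        "class_prompt": "cat",
        "instance_data_dir": "./data/cat",
        "class_data_dir": "./real_reg/samples_cat",
    },
    {
        "instance_prompt": "photo of a <new2> wooden pot",
        "class_prompt": "wooden pot",
        "instance_data_dir": "./data/wooden_pot",
        "class_data_dir": "./real_reg/samples_wooden_pot",
    },
]

# Write the file that --concepts_list will point to.
with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
With a file like this, the launch command would swap --instance_data_dir/--instance_prompt for --concepts_list=concepts_list.json and pass something like --modifier_token "<new1>+<new2>".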
To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6, --max_train_steps anywhere between 1000 and 2000, --freeze_model=crossattn, and at least 15-20 training images. single concept multiple concepts Copied
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export OUTPUT_DIR="path-to-save-model"
export INSTANCE_DIR="./data/cat"

accelerate launch train_custom_diffusion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --class_data_dir=./real_reg/samples_cat/ \
  --with_prior_preservation \
  --real_prior \
  --prior_loss_weight=1.0 \
  --class_prompt="cat" \
  --num_class_images=200 \
  --instance_prompt="photo of a <new1> cat" \
  --resolution=512 \
  --train_batch_size=2 \
  --learning_rate=1e-5 \
  --lr_warmup_steps=0 \
  --max_train_steps=250 \
  --scale_lr \
  --hflip \
  --modifier_token "<new1>" \
  --validation_prompt="<new1> cat sitting in a bucket" \
  --report_to="wandb" \
  --push_to_hub
Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16,
).to("cuda")
pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion("path-to-save-model", weight_name="<new1>.bin")

image = pipeline(
    "<new1> cat sitting in a bucket",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
image.save("cat.png")
Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/7744464357902549435e49d501eaf371.txt b/scrapped_outputs/7744464357902549435e49d501eaf371.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf2d88cd2c276c34e8e6673ea524a7e773e96a51 --- /dev/null +++ b/scrapped_outputs/7744464357902549435e49d501eaf371.txt @@ -0,0 +1,32 @@ Text-Guided Image-to-Image Generation

The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images.
+ + + Copied +import torch +import requests +from PIL import Image +from io import BytesIO + +from diffusers import StableDiffusionImg2ImgPipeline + +# load the pipeline +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + device +) + +# let's download an initial image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image.thumbnail((768, 768)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +images[0].save("fantasy_landscape.png") +You can also run this example on colab diff --git a/scrapped_outputs/7767fe88a99b48ddead398d43cd83265.txt b/scrapped_outputs/7767fe88a99b48ddead398d43cd83265.txt new file mode 100644 index 0000000000000000000000000000000000000000..bcb666def15e33f1f85b4b3d91e464c6e12c8f33 --- /dev/null +++ b/scrapped_outputs/7767fe88a99b48ddead398d43cd83265.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
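Before the parameter reference, a short, hedged sketch of loading the 3D UNet out of an existing text-to-video checkpoint can make the configuration concrete; the repo id and the "unet" subfolder layout are assumptions. Copied
import torch
from diffusers import UNet3DConditionModel

# Assumed checkpoint with a 3D UNet stored under the "unet" subfolder.
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
)

# Inspect the configuration the model was built with.
print(unet.config.down_block_types)
print(unet.config.cross_attention_dim)
print(sum(p.numel() for p in unet.parameters()) / 1e6, "M parameters")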
UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. 
In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/776fabf0e4a4540602788ea2bb4f56c5.txt b/scrapped_outputs/776fabf0e4a4540602788ea2bb4f56c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/776fabf0e4a4540602788ea2bb4f56c5.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. 
+ model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. 
+ Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. +>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/7784bf3fd67acb7d5a33b6d961b0900d.txt b/scrapped_outputs/7784bf3fd67acb7d5a33b6d961b0900d.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/7784bf3fd67acb7d5a33b6d961b0900d.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... 
) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/779281c1af84eb9e0d938034f583ecf1.txt b/scrapped_outputs/779281c1af84eb9e0d938034f583ecf1.txt new file mode 100644 index 0000000000000000000000000000000000000000..f971d25fc44aa74df592b1a56356146d3ed210ee --- /dev/null +++ b/scrapped_outputs/779281c1af84eb9e0d938034f583ecf1.txt @@ -0,0 +1,83 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable Diffusion with samplers from k-diffusion. Note that most of the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here. StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future.
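Since the pipeline is experimental and only its encode_prompt method is documented below, a short usage sketch may be helpful. It assumes the k-diffusion package is installed; the checkpoint id and the "sample_heun" sampler name passed to set_scheduler are illustrative choices rather than requirements. Copied
import torch
from diffusers import StableDiffusionKDiffusionPipeline

# Any Stable Diffusion 1.x checkpoint compatible with this pipeline should work here.
pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Select a k-diffusion sampler by name (here Heun's method).
pipe.set_scheduler("sample_heun")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut_heun.png")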
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/7799a64abdde778b89b15d35f5fe5c61.txt b/scrapped_outputs/7799a64abdde778b89b15d35f5fe5c61.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/7799a64abdde778b89b15d35f5fe5c61.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 
🧨 diff --git a/scrapped_outputs/77a16b1a9534f409206a2872778817c6.txt b/scrapped_outputs/77a16b1a9534f409206a2872778817c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/77a16b1a9534f409206a2872778817c6.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin,Anastasia Maltseva,Igor Pavlov,Andrei Filatov,Arseniy Shakhmatov,Andrey Kuznetsov,Denis Dimitrov, Zein Shaheen The description from it’s Github page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder decoder model based on the T5 architecture. New U-Net architecture featuring BigGAN-deep blocks doubles depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." 
+ +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. 
A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." 
+>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. diff --git a/scrapped_outputs/77a39e5ab696f80523c03738b2d920b4.txt b/scrapped_outputs/77a39e5ab696f80523c03738b2d920b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/77a39e5ab696f80523c03738b2d920b4.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. 
Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/77ae256b2a7bc5bc2848925046faa57b.txt b/scrapped_outputs/77ae256b2a7bc5bc2848925046faa57b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/77ae256b2a7bc5bc2848925046faa57b.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. 
This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. 
output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. 
Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. 
prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference embedding. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. image (PIL.Image.Image or torch.FloatTensor) — +The image or image embedding to be transformed. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, nagative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder, torch_dtype=torch.float16" +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
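For context, here is a hedged sketch of the two-stage workflow that this combined pipeline wraps: a KandinskyV22PriorPipeline followed by the KandinskyV22Img2ImgPipeline documented above. The checkpoint names follow the kandinsky-community repositories used elsewhere on this page, and the input image URL and argument values are illustrative rather than taken from an official example. Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 768))

# Stage 1: turn the text prompts into CLIP image embeddings.
image_emb, negative_image_emb = pipe_prior(prompt, negative_prompt=negative_prompt).to_tuple()

# Stage 2: denoise from the (noised) input image, conditioned on the embeddings.
image = pipe(
    image=original_image,
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    height=768,
    width=768,
    strength=0.3,
    num_inference_steps=50,
).images[0]
image.save("fantasy_landscape.png")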
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDPMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
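A hedged usage sketch for this pipeline, pairing the KandinskyV22PriorEmb2EmbPipeline with a depth ControlNet decoder. The kandinsky-community/kandinsky-2-2-controlnet-depth checkpoint name, the all-zeros placeholder hint, and the strength values are assumptions for illustration; a real workflow would supply an actual depth map as the hint. Copied
import torch
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

prompt = "A robot, 4k photo"
img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/kandinsky/cat.png"
).resize((768, 768))

# Mix the input image and the prompt into a single image embedding.
image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.85).to_tuple()

# Assumed hint format: a (1, 3, H, W) depth map in [0, 1]; replace the zeros with a real estimate.
hint = torch.zeros(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image=img,
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    hint=hint,
    height=768,
    width=768,
    strength=0.5,
    num_inference_steps=50,
).images[0]
image.save("robot_cat_depth.png")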
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/77b710f704a701f74c9aa8091fbdfb74.txt b/scrapped_outputs/77b710f704a701f74c9aa8091fbdfb74.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/77b710f704a701f74c9aa8091fbdfb74.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community-created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. 
Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a graffiti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the make_image_grid helper function you imported at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at their structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the larger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder. 
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/77c4e5bddbf622d89dfd03c4073ef46e.txt b/scrapped_outputs/77c4e5bddbf622d89dfd03c4073ef46e.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/77c4e5bddbf622d89dfd03c4073ef46e.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
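The FreeU mechanism documented in the next entries can be toggled with a single call. In this hedged sketch the scale values are the FreeU authors' suggestion for Stable Diffusion v1.x and should be treated as an assumption and a starting point, not a value stated on this page: Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Assumed scale values for SD v1.x: amplify backbone features (b1, b2), attenuate skip features (s1, s2)
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Restore the standard denoising path
pipe.disable_freeu()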
disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/77e4cdefb0ee7d2277a84da8a5ac7f25.txt b/scrapped_outputs/77e4cdefb0ee7d2277a84da8a5ac7f25.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/77e813d0493fd46c897de1261a896e2b.txt b/scrapped_outputs/77e813d0493fd46c897de1261a896e2b.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/77e813d0493fd46c897de1261a896e2b.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. 
Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. 
Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames, which at 8 frames per second +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 9.0) — +A higher guidance scale value encourages the model to generate videos closely linked to the text +prompt at the expense of lower video quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation.
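The callback and callback_steps arguments documented above can be used for lightweight progress reporting during denoising. A minimal sketch, reusing the damo-vilab/text-to-video-ms-1.7b checkpoint from the usage examples; the log_progress helper is hypothetical and simply follows the documented callback signature. Copied
import torch
from diffusers import TextToVideoSDPipeline

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Hypothetical helper matching callback(step: int, timestep: int, latents: torch.FloatTensor)
def log_progress(step, timestep, latents):
    print(f"step {step:3d} | timestep {int(timestep):4d} | latents shape {tuple(latents.shape)}")

video_frames = pipe(
    "A panda eating bamboo on a rock",
    num_inference_steps=25,
    callback=log_progress,
    callback_steps=5,
).frames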
Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded video latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide video generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +Video frames or a tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents; if latents are passed directly, they are not encoded again. strength (float, optional, defaults to 0.6) — +Indicates the extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 15.0) — +A higher guidance scale value encourages the model to generate videos closely linked to the text +prompt at the expense of lower video quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... 
"cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. diff --git a/scrapped_outputs/77fb4542fff6bf6caaf7799249a72a1e.txt b/scrapped_outputs/77fb4542fff6bf6caaf7799249a72a1e.txt new file mode 100644 index 0000000000000000000000000000000000000000..cc1a72acaf9ff9434b7d5d17c1deecffdb061dc0 --- /dev/null +++ b/scrapped_outputs/77fb4542fff6bf6caaf7799249a72a1e.txt @@ -0,0 +1,318 @@ +Versatile Diffusion Versatile Diffusion was proposed in Versatile Diffusion: Text, Images and Variations All in One Diffusion Model by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi. The abstract from the paper is: Recent advances in diffusion models have set an impressive milestone in many generation tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model. The pipeline design of VD instantiates a unified multi-flow diffusion framework, consisting of sharable and swappable layer modules that enable the crossmodal generality beyond images and text. Through extensive experiments, we demonstrate that VD successfully achieves the following: a) VD outperforms the baseline approaches and handles all its base tasks with competitive quality; b) VD enables novel extensions such as disentanglement of style and semantics, dual- and multi-context blending, etc.; c) The success of our multi-flow multimodal framework over images and text may inspire further diffusion-based universal AI research. Tips You can load the more memory intensive “all-in-one” VersatileDiffusionPipeline that supports all the tasks or use the individual pipelines which are more memory efficient. 
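As a rough illustration of that trade-off, the sketch below loads the all-in-one pipeline and, alternatively, only the text-to-image pipeline; in practice you would pick one of the two. The checkpoint name and the remove_unused_weights() call are taken from the examples further down this page. Copied
import torch
from diffusers import VersatileDiffusionPipeline, VersatileDiffusionTextToImagePipeline

# Option 1: the all-in-one pipeline keeps every component (text and image
# encoders, both UNet flows, VAE) in memory and supports all tasks listed below.
pipe_all = VersatileDiffusionPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
).to("cuda")

# Option 2: a task-specific pipeline is lighter, especially after dropping
# the weights it does not need.
pipe_t2i = VersatileDiffusionTextToImagePipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
)
pipe_t2i.remove_unused_weights()
pipe_t2i = pipe_t2i.to("cuda")

image = pipe_t2i("an astronaut riding a horse on mars").images[0]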
Pipeline Supported tasks VersatileDiffusionPipeline all of the below VersatileDiffusionTextToImagePipeline text-to-image VersatileDiffusionImageVariationPipeline image variation VersatileDiffusionDualGuidedPipeline image-text dual guided generation Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. VersatileDiffusionPipeline class diffusers.VersatileDiffusionPipeline < source > ( tokenizer: CLIPTokenizer image_feature_extractor: CLIPImageProcessor text_encoder: CLIPTextModel image_encoder: CLIPVisionModel image_unet: UNet2DConditionModel text_unet: UNet2DConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). dual_guided < source > ( prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] image: typing.Union[str, typing.List[str]] text_to_image_strength: float = 0.5 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. 
Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe.dual_guided( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... 
).images[0] +>>> image.save("./car_variation.png") image_variation < source > ( image: typing.Union[torch.FloatTensor, PIL.Image.Image] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. 
+ The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.image_variation(image, generator=generator).images[0] +>>> image.save("./car_variation.png") text_to_image < source > ( prompt: typing.Union[str, typing.List[str]] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") VersatileDiffusionTextToImagePipeline class diffusers.VersatileDiffusionTextToImagePipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection image_unet: UNet2DConditionModel text_unet: UNetFlatConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using Versatile Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: typing.Union[str, typing.List[str]] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionTextToImagePipeline +>>> import torch + +>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") VersatileDiffusionImageVariationPipeline class diffusers.VersatileDiffusionImageVariationPipeline < source > ( image_feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_unet: UNet2DConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image variation using Versatile Diffusion. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionImageVariationPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe(image, generator=generator).images[0] +>>> image.save("./car_variation.png") VersatileDiffusionDualGuidedPipeline class diffusers.VersatileDiffusionDualGuidedPipeline < source > ( tokenizer: CLIPTokenizer image_feature_extractor: CLIPImageProcessor text_encoder: CLIPTextModelWithProjection image_encoder: CLIPVisionModelWithProjection image_unet: UNet2DConditionModel text_unet: UNetFlatConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image-text dual-guided generation using Versatile Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] image: typing.Union[str, typing.List[str]] text_to_image_strength: float = 0.5 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionDualGuidedPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... 
).images[0] +>>> image.save("./car_variation.png") diff --git a/scrapped_outputs/78137d27eef4a1860156b6cea02e841e.txt b/scrapped_outputs/78137d27eef4a1860156b6cea02e841e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7841cd857e60013f9da389596ce5b313.txt b/scrapped_outputs/7841cd857e60013f9da389596ce5b313.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/7841cd857e60013f9da389596ce5b313.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/7855255c15d7889dc3f22695908f47f5.txt b/scrapped_outputs/7855255c15d7889dc3f22695908f47f5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/786c5725cfd28c0522c2eafa950cfd33.txt b/scrapped_outputs/786c5725cfd28c0522c2eafa950cfd33.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/78eeba61696e93e7e0ef5881d3817161.txt b/scrapped_outputs/78eeba61696e93e7e0ef5881d3817161.txt new file mode 100644 index 0000000000000000000000000000000000000000..619b44cd8c05a0c372dc935e8e8f3871d9c7d942 --- /dev/null +++ b/scrapped_outputs/78eeba61696e93e7e0ef5881d3817161.txt @@ -0,0 +1,3 @@ +TODO + +Coming soon! 
diff --git a/scrapped_outputs/78f7f8cee9958387c5122b38307bc067.txt b/scrapped_outputs/78f7f8cee9958387c5122b38307bc067.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/78f7f8cee9958387c5122b38307bc067.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. 
Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/79036a0ab3f3163fd8ea2f86db257ff4.txt b/scrapped_outputs/79036a0ab3f3163fd8ea2f86db257ff4.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/79036a0ab3f3163fd8ea2f86db257ff4.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from ImageNet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or list of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/790663e3c6d3aef093516f245358cfc2.txt b/scrapped_outputs/790663e3c6d3aef093516f245358cfc2.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/790663e3c6d3aef093516f245358cfc2.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques such as the Runge-Kutta and linear multi-step methods. The original implementation can be found at crowsonkb/k-diffusion.
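A minimal sketch of how this scheduler is typically swapped into an existing pipeline by rebuilding it from the pipeline's own scheduler config; the runwayml/stable-diffusion-v1-5 checkpoint, prompt, and step count below are only illustrative assumptions. Copied
import torch
from diffusers import DiffusionPipeline, PNDMScheduler

# load a latent diffusion checkpoint; the model id here is only an example
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# rebuild the scheduler from the existing config so the beta schedule and timestep
# settings stay consistent; skip_prk_steps=True skips the Runge-Kutta warm-up steps
pipeline.scheduler = PNDMScheduler.from_config(pipeline.scheduler.config, skip_prk_steps=True)

image = pipeline("an astronaut riding a horse on mars", num_inference_steps=50).images[0]
image.save("astronaut.png")
Setting skip_prk_steps=True keeps only the PLMS (linear multi-step) updates, which is how Stable Diffusion checkpoints typically configure this scheduler.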
PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. 
Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/79124e7636fbb895aa3fcb994a75584d.txt b/scrapped_outputs/79124e7636fbb895aa3fcb994a75584d.txt new file mode 100644 index 0000000000000000000000000000000000000000..cbdfab551c65a04d22ed1db010bb50b8fb750880 --- /dev/null +++ b/scrapped_outputs/79124e7636fbb895aa3fcb994a75584d.txt @@ -0,0 +1,852 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. 
ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. 
callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... 
"runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. 
+ cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
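A minimal sketch of using encode_prompt to precompute text embeddings and feed them back through the prompt_embeds and negative_prompt_embeds arguments. It assumes a Diffusers version in which encode_prompt returns a (prompt_embeds, negative_prompt_embeds) tuple; the checkpoint, prompts, and step count are illustrative only. Copied
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# prepare a Canny edge control image, as in the example above
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(image), 100, 200)
canny_image = Image.fromarray(np.concatenate([edges[:, :, None]] * 3, axis=2))

# encode the prompt once; the returned embeddings can be reused across several calls
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "futuristic-looking woman",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=canny_image,
    num_inference_steps=20,
).images[0]
Precomputing the embeddings this way is mainly useful when the same prompt is reused across many generations, since the text encoder then only needs to run once.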
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. 
If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W). For a NumPy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image default to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all masked areas, and then expand that region based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before being resized to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch +>>> import cv2 +>>> from PIL import Image  # cv2 and PIL are used by make_canny_condition below + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. 
This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. 
controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/7915a8803742939a00367478691cdacc.txt b/scrapped_outputs/7915a8803742939a00367478691cdacc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7915ff5702d2bbfcf1d3e3353b2ccdb5.txt b/scrapped_outputs/7915ff5702d2bbfcf1d3e3353b2ccdb5.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/7915ff5702d2bbfcf1d3e3353b2ccdb5.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. 
Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... 
axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. 
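To make the inference direction concrete, here is a minimal sketch of that reverse loop (roughly what DDPMPipeline does for you internally); it assumes the model defined above and the noise_scheduler created just below:

import torch

sample = torch.randn(1, 3, 128, 128)  # start from pure Gaussian noise
noise_scheduler.set_timesteps(1000)
for t in noise_scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # predict the noise residual
    # the update rule removes a little of the predicted noise at each timestep
    sample = noise_scheduler.step(noise_pred, t, sample).prev_sample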
Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... 
os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! 
Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/799a80e12ca2de7a7869be53486f1953.txt b/scrapped_outputs/799a80e12ca2de7a7869be53486f1953.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/799a80e12ca2de7a7869be53486f1953.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/79b7f884964de1f3f03db373a38c086a.txt b/scrapped_outputs/79b7f884964de1f3f03db373a38c086a.txt new file mode 100644 index 0000000000000000000000000000000000000000..ce58e0ad582b6eaf4da440fd40e4608c31bcb070 --- /dev/null +++ b/scrapped_outputs/79b7f884964de1f3f03db373a38c086a.txt @@ -0,0 +1,380 @@ +Multistep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverMultistepScheduler + + +class diffusers.DPMSolverMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. 
+ + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. + + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. 
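In practice you rarely call the solver updates below by hand; the scheduler is usually just swapped into an existing pipeline via from_config(). A minimal sketch (the checkpoint name is only an example):

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# reuse the pipeline's scheduler config so the beta schedule, prediction type, etc. stay consistent
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# DPM-Solver++ needs far fewer steps than DDIM/PNDM; around 20-25 usually suffices for guided sampling
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]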
+ +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +multistep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DPM-Solver. + +multistep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DPM-Solver. 
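The first-, second-, and third-order updates above are selected automatically inside step(). For reference, a hand-rolled denoising loop with this scheduler looks roughly like the following sketch, where unet stands in for any denoising model and the latent shape is a placeholder:

import torch
from diffusers import DPMSolverMultistepScheduler

scheduler = DPMSolverMultistepScheduler(num_train_timesteps=1000, algorithm_type="dpmsolver++", solver_order=2)
scheduler.set_timesteps(20)

latents = torch.randn(1, 4, 64, 64)  # placeholder latent shape
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)
    noise_pred = unet(model_input, t).sample  # `unet` is a placeholder denoising model
    latents = scheduler.step(noise_pred, t, latents).prev_sample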
+ +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DPM-Solver. diff --git a/scrapped_outputs/79be57c88cf786ce5bd0cf4f73ef6017.txt b/scrapped_outputs/79be57c88cf786ce5bd0cf4f73ef6017.txt new file mode 100644 index 0000000000000000000000000000000000000000..d40a592a99a5a511666fa277762589488810334f --- /dev/null +++ b/scrapped_outputs/79be57c88cf786ce5bd0cf4f73ef6017.txt @@ -0,0 +1,563 @@ +Depth-to-Image Generation + + +StableDiffusionDepth2ImgPipeline + +The depth-guided stable diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. It uses MiDas to infer depth based on an image. +StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the images’ structure. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-2-depth: stabilityai/stable-diffusion-2-depth + +class diffusers.StableDiffusionDepth2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +depth_estimator: DPTForDepthEstimation +feature_extractor: DPTFeatureExtractor + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +depth_map: typing.Optional[torch.FloatTensor] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
+ + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. 
If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. 
The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_lora_weights + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +unet_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the +serialization process easier and cleaner. + + +text_encoder_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from +transformers, we cannot rejig it. That is why we have to explicitly pass the text encoder LoRA state +dict. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save the LoRA parameters corresponding to the UNet and the text encoder. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
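+The memory-saving methods documented above can be combined on this pipeline. Below is a short, hedged sketch (not from the original page) of a typical low-memory setup; the input image is the same COCO sample used in the example earlier.
+import torch
+import requests
+from PIL import Image
+from diffusers import StableDiffusionDepth2ImgPipeline
+
+pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
+)
+# Compute attention in slices to trade a little speed for lower peak memory.
+pipe.enable_attention_slicing()
+# Keep submodules on the CPU and move each one to the GPU only for its forward pass;
+# with sequential offload there is no need to call pipe.to("cuda") yourself.
+pipe.enable_sequential_cpu_offload()
+
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+init_image = Image.open(requests.get(url, stream=True).raw)
+image = pipe(prompt="two tigers", image=init_image, strength=0.7).images[0]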
diff --git a/scrapped_outputs/79cd7826f4566c2a69424b80448c1515.txt b/scrapped_outputs/79cd7826f4566c2a69424b80448c1515.txt new file mode 100644 index 0000000000000000000000000000000000000000..df6c7a63a692fbcb67cd30a67bb4f5f0a2dbf20d --- /dev/null +++ b/scrapped_outputs/79cd7826f4566c2a69424b80448c1515.txt @@ -0,0 +1,156 @@ +IP-Adapter IP-Adapter is an image prompt adapter that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. Furthermore, this adapter can be reused with other models finetuned from the same base model and it can be combined with other adapters like ControlNet. The key idea behind IP-Adapter is the decoupled cross-attention mechanism which adds a separate cross-attention layer just for image features instead of using the same cross-attention layer for both text and image features. This allows the model to learn more image-specific features. Learn how to load an IP-Adapter in the Load adapters guide, and make sure you check out the IP-Adapter Plus section which requires manually loading the image encoder. This guide will walk you through using IP-Adapter for various tasks and use cases. General tasks Let’s take a look at how to use IP-Adapter’s image prompting capabilities with the StableDiffusionXLPipeline for tasks like text-to-image, image-to-image, and inpainting. We also encourage you to try out other pipelines such as Stable Diffusion, LCM-LoRA, ControlNet, T2I-Adapter, or AnimateDiff! In all the following examples, you’ll see the set_ip_adapter_scale() method. This method controls the amount of text or image conditioning to apply to the model. A value of 1.0 means the model is only conditioned on the image prompt. Lowering this value encourages the model to produce more diverse images, but they may not be as aligned with the image prompt. Typically, a value of 0.5 achieves a good balance between the two prompt types and produces good results. In the examples below, try adding low_cpu_mem_usage=True to the load_ip_adapter() method to speed up the loading time. Text-to-image Image-to-image Inpainting Video Crafting the precise text prompt to generate the image you want can be difficult because it may not always capture what you’d like to express. Adding an image alongside the text prompt helps the model better understand what it should generate and can lead to more accurate results. Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the load_ip_adapter() method. Use the subfolder parameter to load the SDXL model weights. Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") +pipeline.set_ip_adapter_scale(0.6) Create a text prompt and load an image prompt before passing them to the pipeline to generate an image. 
Copied image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png") +generator = torch.Generator(device="cpu").manual_seed(0) +images = pipeline( + prompt="a polar bear sitting in a chair drinking a milkshake", + ip_adapter_image=image, + negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images +images[0] IP-Adapter image generated image Configure parameters There are a couple of IP-Adapter parameters that are useful to know about and can help you with your image generation tasks. These parameters can make your workflow more efficient or give you more control over image generation. Image embeddings IP-Adapter-enabled pipelines provide the ip_adapter_image_embeds parameter to accept precomputed image embeddings. This is particularly useful in scenarios where you need to run the IP-Adapter pipeline multiple times because you have more than one image. For example, multi IP-Adapter is a specific use case where you provide multiple styling images to generate a specific image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient. Instead, you can precompute and save the image embeddings to disk (which can save a lot of space if you’re using high-quality images) and load them when you need them. This parameter also gives you the flexibility to load embeddings from other sources. For example, ComfyUI image embeddings for IP-Adapters are compatible with Diffusers and should work out of the box! Call the prepare_ip_adapter_image_embeds() method to encode and generate the image embeddings. Then you can save them to disk with torch.save. If you’re using IP-Adapter with ip_adapter_image_embeds instead of ip_adapter_image, you can set load_ip_adapter(image_encoder_folder=None,...) because you don’t need to load an encoder to generate the image embeddings. Copied image_embeds = pipeline.prepare_ip_adapter_image_embeds( + ip_adapter_image=image, + ip_adapter_image_embeds=None, + device="cuda", + num_images_per_prompt=1, + do_classifier_free_guidance=True, +) + +torch.save(image_embeds, "image_embeds.ipadpt") Now load the image embeddings by passing them to the ip_adapter_image_embeds parameter. Copied image_embeds = torch.load("image_embeds.ipadpt") +images = pipeline( + prompt="a polar bear sitting in a chair drinking a milkshake", + ip_adapter_image_embeds=image_embeds, + negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images IP-Adapter masking Binary masks specify which portion of the output image should be assigned to an IP-Adapter. This is useful for composing more than one IP-Adapter image. For each input IP-Adapter image, you must provide a binary mask and an IP-Adapter. To start, preprocess the input IP-Adapter images with the ~image_processor.IPAdapterMaskProcessor.preprocess() method to generate their masks. For optimal results, provide the output height and width to ~image_processor.IPAdapterMaskProcessor.preprocess(). This ensures masks with different aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, you don’t have to set the height and width.
Copied from diffusers.image_processor import IPAdapterMaskProcessor + +mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png") +mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png") + +output_height = 1024 +output_width = 1024 + +processor = IPAdapterMaskProcessor() +masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width) mask one mask two When there is more than one input IP-Adapter image, load them as a list to ensure each image is assigned to a different IP-Adapter. Each of the input IP-Adapter images here correspond to the masks generated above. Copied face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png") +face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png") + +ip_images = [[face_image1], [face_image2]] IP-Adapter image one IP-Adapter image two Now pass the preprocessed masks to cross_attention_kwargs in the pipeline call. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"] * 2) +pipeline.set_ip_adapter_scale([0.7] * 2) +generator = torch.Generator(device="cpu").manual_seed(0) +num_images = 1 + +image = pipeline( + prompt="2 girls", + ip_adapter_image=ip_images, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=20, + num_images_per_prompt=num_images, + generator=generator, + cross_attention_kwargs={"ip_adapter_masks": masks} +).images[0] +image IP-Adapter masking applied no IP-Adapter masking applied Specific use cases IP-Adapter’s image prompting and compatibility with other adapters and models makes it a versatile tool for a variety of use cases. This section covers some of the more popular applications of IP-Adapter, and we can’t wait to see what you come up with! Face model Generating accurate faces is challenging because they are complex and nuanced. Diffusers supports two IP-Adapter checkpoints specifically trained to generate faces: ip-adapter-full-face_sd15.safetensors is conditioned with images of cropped faces and removed backgrounds ip-adapter-plus-face_sd15.safetensors uses patch embeddings and is conditioned with images of cropped faces IP-Adapter-FaceID is a face-specific IP-Adapter trained with face ID embeddings instead of CLIP image embeddings, allowing you to generate more consistent faces in different contexts and styles. Try out this popular community pipeline and see how it compares to the other face IP-Adapters. For face models, use the h94/IP-Adapter checkpoint. It is also recommended to use DDIMScheduler or EulerDiscreteScheduler for face models. 
Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.5) + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png") +generator = torch.Generator(device="cpu").manual_seed(26) + +image = pipeline( + prompt="A photo of Einstein as a chef, wearing an apron, cooking in a French restaurant", + ip_adapter_image=image, + negative_prompt="lowres, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images[0] +image IP-Adapter image generated image Multi IP-Adapter More than one IP-Adapter can be used at the same time to generate specific images in more diverse styles. For example, you can use IP-Adapter-Face to generate consistent faces and characters, and IP-Adapter Plus to generate those faces in a specific style. Read the IP-Adapter Plus section to learn why you need to manually load the image encoder. Load the image encoder with CLIPVisionModelWithProjection. Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) Next, you’ll load a base model, scheduler, and the IP-Adapters. The IP-Adapters to use are passed as a list to the weight_name parameter: ip-adapter-plus_sdxl_vit-h uses patch embeddings and a ViT-H image encoder ip-adapter-plus-face_sdxl_vit-h has the same architecture but it is conditioned with images of cropped faces Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() Load an image prompt and a folder containing images of a certain style you want to use. Copied face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] IP-Adapter image of face IP-Adapter style images Pass the image prompt and style images as a list to the ip_adapter_image parameter, and run the pipeline! 
Copied generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0] +image     Instant generation Latent Consistency Models (LCM) are diffusion models that can generate images in as little as 4 steps compared to other diffusion models like SDXL that typically require way more steps. This is why image generation with an LCM feels “instantaneous”. IP-Adapters can be plugged into an LCM-LoRA model to instantly generate images with an image prompt. The IP-Adapter weights need to be loaded first, then you can use load_lora_weights() to load the LoRA style and weight you want to apply to your image. Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipeline.load_lora_weights(lcm_lora_id) +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() Try using with a lower IP-Adapter scale to condition image generation more on the herge_style checkpoint, and remember to use the special token herge_style in your prompt to trigger and apply the style. Copied pipeline.set_ip_adapter_scale(0.4) + +prompt = "herge_style woman in armor, best quality, high quality" +generator = torch.Generator(device="cpu").manual_seed(0) + +ip_adapter_image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +image = pipeline( + prompt=prompt, + ip_adapter_image=ip_adapter_image, + num_inference_steps=4, + guidance_scale=1, +).images[0] +image     Structural control To control image generation to an even greater degree, you can combine IP-Adapter with a model like ControlNet. A ControlNet is also an adapter that can be inserted into a diffusion model to allow for conditioning on an additional control image. The control image can be depth maps, edge maps, pose estimations, and more. Load a ControlNetModel checkpoint conditioned on depth maps, insert it into a diffusion model, and load the IP-Adapter. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") Now load the IP-Adapter image and depth map. Copied ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") IP-Adapter image depth map Pass the depth map and IP-Adapter image to the pipeline to generate an image. 
Copied generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + image=depth_map, + ip_adapter_image=ip_adapter_image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images[0] +image diff --git a/scrapped_outputs/79d4ac35adbafab9baf13c4d29cbf807.txt b/scrapped_outputs/79d4ac35adbafab9baf13c4d29cbf807.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/79d4ac35adbafab9baf13c4d29cbf807.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. 
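+To make the latent hand-off described in the introduction of this section concrete, here is a hedged sketch (not part of the original page): a text-to-image pipeline returns latents with output_type="latent", and a second pipeline built from the same components consumes them directly, so the data never leaves latent space. The checkpoint and prompts are only examples.
+import torch
+from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline
+
+text2img = StableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+).to("cuda")
+# Share the same VAE, UNet, and text encoder instead of loading a second copy.
+img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
+
+# Skip VAE decoding and return the raw latents ...
+latents = text2img("a fantasy landscape", output_type="latent").images
+# ... then pass them straight to the next pipeline as its image input.
+image = img2img("the same landscape at sunset", image=latents, strength=0.6).images[0]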
get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked areas in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect ratio of the original image; +for example, if the user drew a mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. If it is a numpy array, it should have +shape [batch, height, width] or [batch, height, width, channel]; if it is a pytorch tensor, it should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use the height of the image input. width (int, optional, defaults to None) — The width of the preprocessed image. If None, will use the width of the image input. This function returns the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors. Lists of these formats are also accepted. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use get_default_height_width() to get the default height. width (int, optional, defaults to None) — The width of the preprocessed image. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio.
+If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. 
rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/79ff91a4a43cb2d4d2d66e515f4f71e9.txt b/scrapped_outputs/79ff91a4a43cb2d4d2d66e515f4f71e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7a4695a43b59a747d8ecd34e0d01f067.txt b/scrapped_outputs/7a4695a43b59a747d8ecd34e0d01f067.txt new file mode 100644 index 0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/7a4695a43b59a747d8ecd34e0d01f067.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. 
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/7a526cf8ebb94d8d6f9357a9be60a17f.txt b/scrapped_outputs/7a526cf8ebb94d8d6f9357a9be60a17f.txt new file mode 100644 index 0000000000000000000000000000000000000000..c988f5fbd3af0bec8702ce742b8e0ac4c55d6c07 --- /dev/null +++ b/scrapped_outputs/7a526cf8ebb94d8d6f9357a9be60a17f.txt @@ -0,0 +1,151 @@ +DPM Discrete Scheduler inspired by Karras et. al paper + + +Overview + +Inspired by Karras et. al. 
Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +KDPM2DiscreteScheduler + + +class diffusers.KDPM2DiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Scheduler created by @crowsonkb in k_diffusion, see: +https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 +Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). 
— +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/7a7c12d60497db8b0669647eb37339d4.txt b/scrapped_outputs/7a7c12d60497db8b0669647eb37339d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/7a7c12d60497db8b0669647eb37339d4.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/7aa11700d4b3e99dd75cd0c05dbd652c.txt b/scrapped_outputs/7aa11700d4b3e99dd75cd0c05dbd652c.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/7aa11700d4b3e99dd75cd0c05dbd652c.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. 
If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. 
You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. 
For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/7aad29c491b2a3b1d8df8c14f9c00540.txt b/scrapped_outputs/7aad29c491b2a3b1d8df8c14f9c00540.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/7aad29c491b2a3b1d8df8c14f9c00540.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. 
The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. 
If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). 
Output class for audio pipelines. diff --git a/scrapped_outputs/7b134cde7595a4ff1ff268829addc335.txt b/scrapped_outputs/7b134cde7595a4ff1ff268829addc335.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/7b134cde7595a4ff1ff268829addc335.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. 
For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. 
Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). 
Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. 
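As a minimal sketch (the checkpoint path is a hypothetical placeholder), you could load DreamBooth LoRA weights on top of the base model for inference after training with train_dreambooth_lora.py: Copied from diffusers import DiffusionPipeline
import torch

# load the base model the LoRA weights were trained against
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# attach the DreamBooth LoRA weights saved by the training script (path is illustrative)
pipeline.load_lora_weights("path_to_lora_output")

image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dog-bucket-lora.png")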
Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/7b203c651ce498ea7f5612d7fec78b57.txt b/scrapped_outputs/7b203c651ce498ea7f5612d7fec78b57.txt new file mode 100644 index 0000000000000000000000000000000000000000..191230d895650a96c9b8f907a3911fdd00d72140 --- /dev/null +++ b/scrapped_outputs/7b203c651ce498ea7f5612d7fec78b57.txt @@ -0,0 +1,55 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. 
Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/7b2a12ecacf7e9796919a57ec022e178.txt b/scrapped_outputs/7b2a12ecacf7e9796919a57ec022e178.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/7b2a12ecacf7e9796919a57ec022e178.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
+Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. 
+>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step.
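A minimal sketch of toggling these memory-saving options on the pipeline (the checkpoint is the one used in the example above; the inference step itself is elided):

Copied
import torch
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("cuda")

# Split VAE decoding into slices and tiles to lower peak memory usage.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run inference as shown in the __call__ example above ...

# Restore single-pass decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()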
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/7b2e38ed54fffee8196bf4b1cefacb1a.txt b/scrapped_outputs/7b2e38ed54fffee8196bf4b1cefacb1a.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/7b2e38ed54fffee8196bf4b1cefacb1a.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. 
This guide will show you how you load .safetensor files, and how to convert Stable Diffusion model weights stored in other formats to .safetensor. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. 
The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/7b379ac72c68a21667f2fd53fcabc104.txt b/scrapped_outputs/7b379ac72c68a21667f2fd53fcabc104.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/7b379ac72c68a21667f2fd53fcabc104.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, defaults to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of an MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline.
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. 
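Both pipelines above can also return a mesh by passing output_type="mesh"; the result can then be written out with export_to_ply from diffusers.utils. A minimal sketch using the text-to-3D pipeline (the prompt and file name are illustrative):

Copied
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Request the raw mesh instead of rendered image frames.
mesh = pipe(
    "a donut",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",
).images[0]

# Export the mesh to a .ply file for use in external 3D tools.
ply_path = export_to_ply(mesh, "donut_3d.ply")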
diff --git a/scrapped_outputs/7b3a9625ef99d479c591b4de3d9f146b.txt b/scrapped_outputs/7b3a9625ef99d479c591b4de3d9f146b.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c39533f3811507775688b8fc90c71c93f8c744f --- /dev/null +++ b/scrapped_outputs/7b3a9625ef99d479c591b4de3d9f146b.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). 
adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
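A minimal sketch of enabling and disabling FreeU on this pipeline; the scaling factors below are only illustrative placeholders, so check the official FreeU repository for values validated for your base model:

Copied
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# b1/b2 amplify backbone features, s1/s2 attenuate skip features (illustrative values).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# ... run the editing example shown earlier ...

# Switch back to the default denoising behavior.
pipe.disable_freeu()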
StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale pushes the generated image towards the initial image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages the model to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a +value of at least 1.
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed. guidance_rescale is defined as φ in equation 16 of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR.
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/7b76bf00efdcbeae1b737883da46ba28.txt b/scrapped_outputs/7b76bf00efdcbeae1b737883da46ba28.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/7b76bf00efdcbeae1b737883da46ba28.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. 
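A minimal sketch of swapping this scheduler into an existing pipeline via from_config (the checkpoint and prompt are illustrative):

Copied
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config so the beta schedule and timestep spacing stay consistent.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# 20-30 steps is usually enough with this scheduler.
image = pipe("a red fox in the snow", num_inference_steps=25).images[0]
image.save("fox.png")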
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/7b977f44ef5a5fddf60cb29c7e12c994.txt b/scrapped_outputs/7b977f44ef5a5fddf60cb29c7e12c994.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7ba45e6f0aedbae95dfa1217c02ba8f8.txt b/scrapped_outputs/7ba45e6f0aedbae95dfa1217c02ba8f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/7ba45e6f0aedbae95dfa1217c02ba8f8.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. 
attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings (str, optional) — +The type of positional embeddings to apply to the sequence input before use. +num_positional_embeddings (int, optional) — +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.transformer_temporal.TransformerTemporalModelOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. 
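To make the expected frame layout concrete, here is a small illustrative sketch that is not part of the original reference; the configuration and tensor sizes below are arbitrary. Frames are flattened into the batch dimension, and num_frames tells the block how to fold them back out so attention runs along the temporal axis. Copied
import torch
from diffusers.models import TransformerTemporalModel

# Tiny illustrative configuration: inner dimension = 8 heads * 8 channels per head = 64.
model = TransformerTemporalModel(
    num_attention_heads=8,
    attention_head_dim=8,
    in_channels=64,
    norm_num_groups=32,
)

batch_size, num_frames, channels, height, width = 2, 4, 64, 16, 16

# Continuous inputs are passed as (batch_size * num_frames, channels, height, width).
hidden_states = torch.randn(batch_size * num_frames, channels, height, width)

# num_frames is used internally to reshape the batch so attention is applied over the frame axis.
output = model(hidden_states, num_frames=num_frames)
print(output.sample.shape)  # torch.Size([8, 64, 16, 16])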
TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. diff --git a/scrapped_outputs/7bbfc82f2811be0954ea69db52c8c1b5.txt b/scrapped_outputs/7bbfc82f2811be0954ea69db52c8c1b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/7bbfc82f2811be0954ea69db52c8c1b5.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in the future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. 
train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. 
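These processors are rarely instantiated on their own. As a minimal sketch that is not part of the original reference (the checkpoint name is only an example), a processor instance is assigned to every attention module of a model with set_attn_processor: Copied
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Load a UNet from an example checkpoint; any UNet2DConditionModel works the same way.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Inspect which processor class each attention module currently uses.
print(set(type(proc).__name__ for proc in unet.attn_processors.values()))

# Assign a single processor instance to all attention modules at once.
unet.set_attn_processor(AttnProcessor2_0())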
LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to False) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. diff --git a/scrapped_outputs/7bc49b638a2da96a07b99b7e998f9120.txt b/scrapped_outputs/7bc49b638a2da96a07b99b7e998f9120.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/7bc49b638a2da96a07b99b7e998f9120.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. 
Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. 
In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high-quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formatted/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" prints a version that is higher than or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. 
If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. 
Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. 
Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. 
In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r examples/<your-example-folder>/requirements.txt Therefore, when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python <your-example>.py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of what they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. 
Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. 
If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. 
If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If it is an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:<your-GitHub-handle>/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/<TEST_TO_RUN>.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. 
This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '<your message without GitHub references>' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/7bcbefc88455945dc46ac1598b849130.txt b/scrapped_outputs/7bcbefc88455945dc46ac1598b849130.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/7bcbefc88455945dc46ac1598b849130.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. 
The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). 
__call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to process the reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project the image embedding into the phrase embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation.
If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box. input_phrases_mask (int or List[int]) — +A mask (one value per entry in gligen_phrases, either 0 or 1) indicating which phrases are used for grounding; entries set to 0 are ignored. input_images_mask (int or List[int]) — +A mask (one value per entry in gligen_images, either 0 or 1) indicating which reference images are used for grounding; entries set to 0 are ignored. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1] (a pixel-to-normalized conversion sketch is shown after this pipeline’s method reference below). gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting).
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... 
num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
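Where memory is tight, the offloading and VAE helpers documented above can be combined on either GLIGEN pipeline before calling it. The following is a minimal sketch (it reuses the checkpoint from the examples above; whether you need all three helpers depends on your hardware):

import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
)

# Move each sub-model to the GPU only while it runs, instead of keeping the whole pipeline resident.
pipe.enable_model_cpu_offload()

# Decode the VAE output in slices and tiles to lower the peak memory of the decoding step.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# The pipeline is then called exactly as in the examples above.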
prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. 
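The gligen_boxes argument above expects normalized [xmin, ymin, xmax, ymax] coordinates in [0, 1]. As a small utility sketch (not part of the pipeline API), pixel-space boxes can be converted like this:

def normalize_boxes(pixel_boxes, image_width, image_height):
    """Convert [xmin, ymin, xmax, ymax] pixel boxes into the normalized [0, 1] format expected by gligen_boxes."""
    return [
        [xmin / image_width, ymin / image_height, xmax / image_width, ymax / image_height]
        for xmin, ymin, xmax, ymax in pixel_boxes
    ]

# For a 512x512 input image, a box from pixel (137, 312) to (244, 368) becomes roughly [0.2676, 0.6094, 0.4766, 0.7188].
boxes = normalize_boxes([[137, 312, 244, 368]], image_width=512, image_height=512)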
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/7be82582ff55036862a38a9dd09379a3.txt b/scrapped_outputs/7be82582ff55036862a38a9dd09379a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..619b44cd8c05a0c372dc935e8e8f3871d9c7d942 --- /dev/null +++ b/scrapped_outputs/7be82582ff55036862a38a9dd09379a3.txt @@ -0,0 +1,3 @@ +TODO + +Coming soon! diff --git a/scrapped_outputs/7bf15af26c5478e3d526e8e2409fa6d7.txt b/scrapped_outputs/7bf15af26c5478e3d526e8e2409fa6d7.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/7bf15af26c5478e3d526e8e2409fa6d7.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). 
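As a brief, hedged illustration of the noise_level tip above (using the image variation checkpoint that appears in the next section), a higher value is simply passed to the pipeline call:

import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
)

# noise_level=0 (the default) stays close to the input image; larger values (bounded by the
# image noising scheduler's number of train timesteps) trade fidelity for variation.
image = pipe(init_image, prompt="A fantasy landscape, trending on artstation", noise_level=500).images[0]
image.save("variation_high_noise.png")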
Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder.
prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper.
Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
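As a hedged sketch of the reproducibility-related arguments documented in the __call__ reference above (generator, prior_num_inference_steps, prior_guidance_scale, noise_level), the example below mirrors the pipeline assembly from the Text-to-Image Generation section and only adds a seeded generator:

import torch
from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
from diffusers.models import PriorTransformer
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

# Assemble the pipeline with the Karlo prior, as in the "Text-to-Image Generation" section above.
prior_model_id = "kakaobrain/karlo-v1-alpha"
prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=torch.float16)
prior_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
prior_text_model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=torch.float16)
prior_scheduler = DDPMScheduler.from_config(
    UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler").config
)

pipe = StableUnCLIPPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small",
    torch_dtype=torch.float16,
    variant="fp16",
    prior_tokenizer=prior_tokenizer,
    prior_text_encoder=prior_text_model,
    prior=prior,
    prior_scheduler=prior_scheduler,
).to("cuda")

# A seeded generator fixes both the prior and the decoder sampling, so the call below is repeatable.
generator = torch.Generator(device="cuda").manual_seed(0)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    generator=generator,
    prior_num_inference_steps=25,  # denoising steps of the prior
    prior_guidance_scale=4.0,      # guidance used while sampling the image embedding
    noise_level=0,                 # no extra noise added to the image embedding
).images[0]
image.save("astronaut_deterministic.png")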
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings is appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents.
scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. 
When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/7c118ba9b2eefc07c8a070746a3a9285.txt b/scrapped_outputs/7c118ba9b2eefc07c8a070746a3a9285.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/7c118ba9b2eefc07c8a070746a3a9285.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. 
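The pipeline reference below does not include a usage snippet, so here is a minimal, hedged sketch; kakaobrain/karlo-v1-alpha is assumed to be the matching checkpoint for the karlo release referenced above:

import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"

# prior_guidance_scale and decoder_guidance_scale correspond to the defaults documented below.
image = pipe(prompt, prior_guidance_scale=4.0, decoder_guidance_scale=8.0).images[0]
image.save("karlo_frog.png")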
You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). 
super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
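For reference, a minimal text-to-image sketch with UnCLIPPipeline is shown below. The checkpoint id kakaobrain/karlo-v1-alpha and the half-precision settings are assumptions to verify on the Hub; the guidance-scale values simply mirror the call signature's defaults documented above. Copied
import torch
from diffusers import UnCLIPPipeline

# Checkpoint id is an assumption; check the Hub for the karlo unCLIP weights.
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"
# prior_guidance_scale and decoder_guidance_scale mirror the defaults in the __call__ signature above.
image = pipe(prompt, prior_guidance_scale=4.0, decoder_guidance_scale=8.0).images[0]
image.save("frog.png")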
diff --git a/scrapped_outputs/7c5aa9790090cbd83814d1d24ae14e0a.txt b/scrapped_outputs/7c5aa9790090cbd83814d1d24ae14e0a.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/7c5aa9790090cbd83814d1d24ae14e0a.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. 
compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. 
We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/7c74d81c0d99e9197514d8f1432c974d.txt b/scrapped_outputs/7c74d81c0d99e9197514d8f1432c974d.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/7c74d81c0d99e9197514d8f1432c974d.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. 
Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert ( + "TPU" in device_type, + "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +) +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. 
This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). 
This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/7c7cb2b900368a23d1d579bb17d0a0b0.txt b/scrapped_outputs/7c7cb2b900368a23d1d579bb17d0a0b0.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/7c7cb2b900368a23d1d579bb17d0a0b0.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. 
The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... 
) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/7c85031d5f9308a6f7a4de4057572b82.txt b/scrapped_outputs/7c85031d5f9308a6f7a4de4057572b82.txt new file mode 100644 index 0000000000000000000000000000000000000000..666a3fca5765b9f54e337bb3359e45c9d5018c27 --- /dev/null +++ b/scrapped_outputs/7c85031d5f9308a6f7a4de4057572b82.txt @@ -0,0 +1,70 @@ +TCDScheduler Trajectory Consistency Distillation by Jianbin Zheng, Minghui Hu, Zhongyi Fan, Chaoyue Wang, Changxing Ding, Dacheng Tao and Tat-Jen Cham introduced a Strategic Stochastic Sampling (Algorithm 4) that is capable of generating good samples in a small number of steps. Distinguishing it as an advanced iteration of the multistep scheduler (Algorithm 1) in the Consistency Models, Strategic Stochastic Sampling specifically tailored for the trajectory consistency function. The abstract from the paper is: Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy. To address this limitation, we initially delve into and elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses trajectory consistency function and strategic stochastic sampling. The trajectory consistency function diminishes the distillation errors by broadening the scope of the self-consistency boundary condition and endowing the TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE. Additionally, strategic stochastic sampling is specifically designed to circumvent the accumulated errors inherent in multi-step consistency sampling, which is meticulously tailored to complement the TCD model. Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs. The original codebase can be found at jabir-zheng/TCD. 
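Before the class reference, here is a minimal usage sketch: TCDScheduler is typically swapped into an existing pipeline together with a TCD-distilled LoRA so that only a few sampling steps are needed. The base checkpoint and the LoRA repository id below are assumptions to verify on the Hub. Copied
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

base_id = "stabilityai/stable-diffusion-xl-base-1.0"  # assumed base checkpoint
tcd_lora_id = "h1t/TCD-SDXL-LoRA"  # assumed TCD LoRA checkpoint

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_id, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
# Replace the pipeline's default scheduler with TCDScheduler, reusing its configuration.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

image = pipe(
    "a portrait photo of a red panda wearing a scarf",
    num_inference_steps=4,
    guidance_scale=0.0,
    eta=0.3,  # the gamma-like stochasticity parameter described in step() below
).images[0]
image.save("panda.png")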
TCDScheduler class diffusers.TCDScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. 
This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. TCDScheduler incorporates the Strategic Stochastic Sampling introduced by the paper Trajectory Consistency Distillation, +extending the original Multistep Consistency Sampling to enable unrestricted trajectory traversal. This code is based on the official repo of TCD(https://github.com/jabir-zheng/TCD). This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.3 generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.TCDSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +A stochastic parameter (referred to as gamma in the paper) used to control the stochasticity in every step. +When eta = 0, it represents deterministic sampling, whereas eta = 1 indicates full stochastic sampling. 
generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a TCDSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.TCDSchedulerOutput or tuple + +If return_dict is True, TCDSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). TCDSchedulerOutput class diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput < source > ( prev_sample: FloatTensor pred_noised_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_noised_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted noised sample (x_{s}) based on the model output from the current timestep. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/7cc7dcb2cfb01983b2768a903ab3843e.txt b/scrapped_outputs/7cc7dcb2cfb01983b2768a903ab3843e.txt new file mode 100644 index 0000000000000000000000000000000000000000..3a98c66d29962c8bcbbc0fa8e780c5fcacb9d94f --- /dev/null +++ b/scrapped_outputs/7cc7dcb2cfb01983b2768a903ab3843e.txt @@ -0,0 +1,102 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll a function to generate the timesteps weights depending on the number of timesteps and the timestep bias strategy to apply. 
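To make the bias strategy concrete, the following is a simplified, standalone sketch of such a weighting function. It is not the script's exact implementation (the helper in train_text_to_image_sdxl.py reads these settings from the parsed --timestep_bias_* arguments listed above); the function name and defaults here are illustrative only. Copied
import torch

def timestep_bias_weights(num_timesteps, strategy="later", multiplier=2.0, portion=0.25, begin=0, end=0):
    # Start from a uniform distribution over all training timesteps.
    weights = torch.ones(num_timesteps)
    if strategy == "none" or multiplier == 1.0:
        return weights / weights.sum()
    # Select the slice of timesteps whose sampling probability should be scaled.
    if strategy == "later":
        start, stop = int(num_timesteps * (1 - portion)), num_timesteps
    elif strategy == "earlier":
        start, stop = 0, int(num_timesteps * portion)
    else:  # "range": use explicit begin/end bounds
        start, stop = begin, end
    weights[start:stop] *= multiplier
    # Renormalize so the weights form a valid sampling distribution.
    return weights / weights.sum()

# The training loop then draws biased timesteps for a batch with torch.multinomial,
# as shown further down in this guide.
weights = timestep_bias_weights(1000, strategy="later", multiplier=2.0, portion=0.25)
timesteps = torch.multinomial(weights, 16, replacement=True).long()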
Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! + + + + Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." +image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") + + +PyTorch XLA allows you to run PyTorch on XLA devices such as TPUs, which can be faster. The initial warmup step takes longer because the model needs to be compiled and optimized. However, subsequent calls to the pipeline on an input with the same length as the original prompt are much faster because it can reuse the optimized graph. Copied from diffusers import DiffusionPipeline +import torch +import torch_xla.core.xla_model as xm + +device = xm.xla_device() +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to(device) + +prompt = "A pokemon with green eyes and red legs." +start = time() +image = pipeline(prompt, num_inference_steps=inference_steps).images[0] +print(f'Compilation time is {time()-start} sec') +image.save("pokemon.png") + +start = time() +image = pipeline(prompt, num_inference_steps=inference_steps).images[0] +print(f'Inference time is {time()-start} sec after compilation') + + + Next steps Congratulations on training a SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use it’s refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/7cf1ef0a9bfd3e4daa1005539bff6ac9.txt b/scrapped_outputs/7cf1ef0a9bfd3e4daa1005539bff6ac9.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/7cf1ef0a9bfd3e4daa1005539bff6ac9.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. 
+If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Hide Pytorch content Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. 
+The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. +You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/7d1571de72b4d2aac5c26b7249802c6a.txt b/scrapped_outputs/7d1571de72b4d2aac5c26b7249802c6a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/7d1571de72b4d2aac5c26b7249802c6a.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
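To make the output behaviors described earlier on this page concrete (attribute access, keyword lookup, and to_tuple()), here is a minimal sketch reusing the google/ddpm-cifar10-32 checkpoint from the first example; it only exercises the documented API and is an illustration, not an additional output class: Copied from diffusers import DDIMPipeline

pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
outputs = pipeline()

images = outputs.images       # attribute access
images = outputs["images"]    # dictionary-style lookup of the same field

# A BaseOutput cannot be unpacked directly; convert it to a tuple first.
# to_tuple() keeps only the attributes that are not None.
(images,) = outputs.to_tuple()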
ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/7d17bb954e87ec96a7c29f7c9c035a25.txt b/scrapped_outputs/7d17bb954e87ec96a7c29f7c9c035a25.txt new file mode 100644 index 0000000000000000000000000000000000000000..82a9574b3624949d4502b1947f4846160a989916 --- /dev/null +++ b/scrapped_outputs/7d17bb954e87ec96a7c29f7c9c035a25.txt @@ -0,0 +1,58 @@ +How to use Stable Diffusion on Habana Gaudi + +🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum Habana. + +Requirements + +Optimum Habana 1.3 or later, here is how to install it. +SynapseAI 1.7. + +Inference Pipeline + +To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: +A pipeline with GaudiStableDiffusionPipeline. This pipeline supports text-to-image generation. +A scheduler with GaudiDDIMScheduler. This scheduler has been optimized for Habana Gaudi. +When initializing the pipeline, you have to specify use_habana=True to deploy it on HPUs. +Furthermore, in order to get the fastest possible generations you should enable HPU graphs with use_hpu_graphs=True. +Finally, you will need to specify a Gaudi configuration which can be downloaded from the Hugging Face Hub. + + + Copied +from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion", +) +You can then call the pipeline to generate images by batches from one or several prompts: + + + Copied +outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) +For more information, check out Optimum Habana’s documentation and the example provided in the official Github repository. + +Benchmark + +Here are the latencies for Habana Gaudi 1 and Gaudi 2 with the Habana/stable-diffusion Gaudi configuration (mixed precision bf16/fp32): + +Latency +Batch size +Gaudi 1 +4.37s +4/8 +Gaudi 2 +1.19s +4/8 diff --git a/scrapped_outputs/7d1d9c83b64623cc15d0b7dc7d17f1a3.txt b/scrapped_outputs/7d1d9c83b64623cc15d0b7dc7d17f1a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..c096748fc9379b50eaf61a541e581e9ab2545d55 --- /dev/null +++ b/scrapped_outputs/7d1d9c83b64623cc15d0b7dc7d17f1a3.txt @@ -0,0 +1,383 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. 
Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain.
+Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object.
+Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing.
+As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch
+import imageio
+from diffusers import TextToVideoZeroPipeline
+
+model_id = "runwayml/stable-diffusion-v1-5"
+pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+
+prompt = "A panda is playing guitar on times square"
+result = pipe(prompt=prompt).images
+result = [(r * 255).astype("uint8") for r in result]
+imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of frames to be generated.
Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch
+import imageio
+from diffusers import TextToVideoZeroPipeline
+import numpy as np
+
+model_id = "runwayml/stable-diffusion-v1-5"
+pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+seed = 0
+video_length = 24  # 24 frames ÷ 4 fps = 6 seconds
+chunk_size = 8
+prompt = "A panda is playing guitar on times square"
+
+# Generate the video chunk-by-chunk
+result = []
+chunk_ids = np.arange(0, video_length, chunk_size - 1)
+generator = torch.Generator(device="cuda")
+for i in range(len(chunk_ids)):
+    print(f"Processing chunk {i + 1} / {len(chunk_ids)}")
+    ch_start = chunk_ids[i]
+    ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
+    # Attach the first frame for Cross Frame Attention
+    frame_ids = [0] + list(range(ch_start, ch_end))
+    # Fix the seed for the temporal consistency
+    generator.manual_seed(seed)
+    output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids)
+    result.append(output.images[1:])
+
+# Concatenate chunks and save
+result = np.concatenate(result)
+result = [(r * 255).astype("uint8") for r in result]
+imageio.mimsave("video.mp4", result, fps=4) SDXL Support In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline: Copied import torch
+from diffusers import TextToVideoZeroSDXLPipeline
+
+model_id = "stabilityai/stable-diffusion-xl-base-1.0"
+pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
+    model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+).to("cuda") Text-To-Video with Pose Control To generate a video from a prompt with additional pose control: Download a demo video Copied from huggingface_hub import hf_hub_download
+
+filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4"
+repo_id = "PAIR/Text2Video-Zero"
+video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image
+import imageio
+
+reader = imageio.get_reader(video_path, "ffmpeg")
+frame_count = 8
+pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract poses from an actual video, see the ControlNet documentation.
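If you want to produce the pose images yourself instead of downloading pre-extracted ones, one possible approach (a sketch, not part of the original example) is to run an OpenPose annotator such as the one in the controlnet_aux package over your own footage; the package choice, the lllyasviel/Annotators checkpoint, and the input_video.mp4 path below are assumptions to adapt to your setup: Copied import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector  # assumed annotator package

# Load an OpenPose annotator (commonly used checkpoint, adjust if needed)
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# "input_video.mp4" is a placeholder path for your own video
reader = imageio.get_reader("input_video.mp4", "ffmpeg")
frame_count = 8
# Turn each raw frame into a pose skeleton image usable by the openpose ControlNet
pose_images = [open_pose(Image.fromarray(reader.get_data(i))) for i in range(frame_count)]
The resulting pose_images list can then be passed to the pipeline exactly as in the next step.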
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch
+import imageio
+from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
+from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
+
+model_id = "runwayml/stable-diffusion-v1-5"
+controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
+pipe = StableDiffusionControlNetPipeline.from_pretrained(
+    model_id, controlnet=controlnet, torch_dtype=torch.float16
+).to("cuda")
+
+# Set the attention processor
+pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+
+# fix latents for all frames
+latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
+
+prompt = "Darth Vader dancing in a desert"
+result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
+imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: Copied import torch
+import imageio
+from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
+from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
+
+controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0'
+model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
+
+controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16)
+pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
+    model_id, controlnet=controlnet, torch_dtype=torch.float16
+).to('cuda')
+
+# Set the attention processor
+pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+
+# fix latents for all frames
+latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
+
+prompt = "Darth Vader dancing in a desert"
+result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
+imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, using the Canny edge ControlNet model instead.
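Because the edge-guided variant is only described in words above, here is a hedged sketch of what those steps might look like with the lllyasviel/sd-controlnet-canny checkpoint. It assumes the input frames have already been read into a list of PIL images called video (as in the Video Instruct-Pix2Pix example below) and that the edge maps are computed with the CannyDetector from the controlnet_aux package; the prompt and detector thresholds are placeholders to adapt: Copied import torch
import imageio
from controlnet_aux import CannyDetector  # assumed annotator package
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# `video` is assumed to be a list of PIL frames read as in the other examples
canny = CannyDetector()
edge_images = [canny(frame) for frame in video]

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Cross-frame attention keeps appearance consistent across frames
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# Fix the latents for all frames, as in the pose-guided example
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(edge_images), 1, 1, 1)

prompt = "a watercolor painting of a dancer"  # example prompt, replace with your own
result = pipe(prompt=[prompt] * len(edge_images), image=edge_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)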
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
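As a small illustration of the scheduler note above, a different scheduler can be swapped into the zero-shot pipeline with the usual from_config pattern; DPMSolverMultistepScheduler is used here only as an example, and whether it is the right speed/quality trade-off for your prompts is exactly what the Schedulers guide helps you evaluate: Copied import torch
from diffusers import TextToVideoZeroPipeline, DPMSolverMultistepScheduler

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler configuration when swapping schedulers
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)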
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. diff --git a/scrapped_outputs/7d3724601719a1af22fd0a5f14745b69.txt b/scrapped_outputs/7d3724601719a1af22fd0a5f14745b69.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/7d3724601719a1af22fd0a5f14745b69.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! 
This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/7d4b7b8d41f402b83ada498bf60f4c3f.txt b/scrapped_outputs/7d4b7b8d41f402b83ada498bf60f4c3f.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/7d4b7b8d41f402b83ada498bf60f4c3f.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. 
ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. 
+If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/7d55db2665e03d5ccc6b57a49b95f57b.txt b/scrapped_outputs/7d55db2665e03d5ccc6b57a49b95f57b.txt new file mode 100644 index 0000000000000000000000000000000000000000..f09fe46dfcc2641727bea35773a653b7ffb9e5e5 --- /dev/null +++ b/scrapped_outputs/7d55db2665e03d5ccc6b57a49b95f57b.txt @@ -0,0 +1,96 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. 
+Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. 
+ The call function to the pipeline for generation. Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/7d7798d05a4ebc2c5b1ae679dc299a00.txt b/scrapped_outputs/7d7798d05a4ebc2c5b1ae679dc299a00.txt new file mode 100644 index 0000000000000000000000000000000000000000..f75b37d0697a81c66f25162d0911b66f223d8162 --- /dev/null +++ b/scrapped_outputs/7d7798d05a4ebc2c5b1ae679dc299a00.txt @@ -0,0 +1,106 @@ +Audio Diffusion Audio Diffusion is by Robert Dargavel Smith, and it leverages the recent advances in image generation from diffusion models by converting audio samples to and from Mel spectrogram images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioDiffusionPipeline class diffusers.AudioDiffusionPipeline < source > ( vqvae: AutoencoderKL unet: UNet2DConditionModel mel: Mel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler] ) Parameters vqae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. mel (Mel) — +Transform audio into a spectrogram. scheduler (DDIMScheduler or DDPMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler or DDPMScheduler. Pipeline for audio diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 audio_file: str = None raw_audio: ndarray = None slice: int = 0 start_step: int = 0 steps: int = None generator: Generator = None mask_start_secs: float = 0 mask_end_secs: float = 0 step_generator: Generator = None eta: float = 0 noise: Tensor = None encoding: Tensor = None return_dict = True ) → List[PIL Image] Parameters batch_size (int) — +Number of samples to generate. audio_file (str) — +An audio file that must be on disk due to Librosa limitation. raw_audio (np.ndarray) — +The raw audio file as a NumPy array. 
slice (int) — +Slice number of audio to convert. start_step (int) — +Step to start diffusion from. steps (int) — +Number of denoising steps (defaults to 50 for DDIM and 1000 for DDPM). generator (torch.Generator) — +A torch.Generator to make +generation deterministic. mask_start_secs (float) — +Number of seconds of audio to mask (not generate) at start. mask_end_secs (float) — +Number of seconds of audio to mask (not generate) at end. step_generator (torch.Generator) — +A torch.Generator used to denoise. If not provided, generator is used. eta (float) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. noise (torch.Tensor) — +A noise tensor of shape (batch_size, 1, height, width) or None. encoding (torch.Tensor) — +A tensor for UNet2DConditionModel of shape (batch_size, seq_length, cross_attention_dim). return_dict (bool) — +Whether or not to return an AudioPipelineOutput, an ImagePipelineOutput, or a plain tuple. Returns +List[PIL Image] + +A list of Mel spectrogram images, together with the sample rate (float) and the raw audio (List[np.ndarray]). + The call function to the pipeline for generation. Examples: For audio diffusion: Copied import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) For latent audio diffusion: Copied import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) For other tasks like variation, inpainting, outpainting, etc.: Copied output = pipe( + raw_audio=output.audios[0, 0], + start_step=int(pipe.get_default_steps() / 2), + mask_start_secs=1, + mask_end_secs=1, +) +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) encode < source > ( images: typing.List[PIL.Image.Image] steps: int = 50 ) → np.ndarray Parameters images (List[PIL Image]) — +List of images to encode. steps (int) — +Number of encoding steps to perform (defaults to 50). Returns +np.ndarray + +A noise tensor of shape (batch_size, 1, height, width). + Reverse the denoising step process to recover a noisy image from the generated image. get_default_steps < source > ( ) → int Returns +int + +The number of steps. + Returns default number of steps recommended for inference. slerp < source > ( x0: Tensor x1: Tensor alpha: float ) → torch.Tensor Parameters x0 (torch.Tensor) — +The first tensor to interpolate between. x1 (torch.Tensor) — +Second tensor to interpolate between. alpha (float) — +Interpolation factor between 0 and 1. Returns +torch.Tensor + +The interpolated tensor. + Spherical Linear intERPolation; a usage sketch follows the AudioPipelineOutput section below. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines.
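To make the interaction between encode(), slerp() and the noise argument of the pipeline call more concrete, here is a minimal sketch (not taken from the original documentation) that interpolates between two audio clips. The local file names are placeholders, and it assumes encode() requires a DDIM scheduler, so one is swapped in first:

import torch
from diffusers import DDIMScheduler, DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)
pipe.scheduler = DDIMScheduler()  # assumption: encode() only works with a DDIM scheduler

# Convert one slice of each clip into a spectrogram image, then invert it back to noise
pipe.mel.load_audio("clip_a.wav")  # placeholder path
image_a = pipe.mel.audio_slice_to_image(0)
pipe.mel.load_audio("clip_b.wav")  # placeholder path
image_b = pipe.mel.audio_slice_to_image(0)

noise_a = torch.from_numpy(pipe.encode([image_a])).to(device)
noise_b = torch.from_numpy(pipe.encode([image_b])).to(device)

# Spherically interpolate halfway between the two noise tensors and regenerate audio
noise = pipe.slerp(noise_a, noise_b, 0.5)
output = pipe(noise=noise)
# output.images holds the spectrograms, output.audios the waveforms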
ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. Mel class diffusers.Mel < source > ( x_res: int = 256 y_res: int = 256 sample_rate: int = 22050 n_fft: int = 2048 hop_length: int = 512 top_db: int = 80 n_iter: int = 32 ) Parameters x_res (int) — +x resolution of spectrogram (time). y_res (int) — +y resolution of spectrogram (frequency bins). sample_rate (int) — +Sample rate of audio. n_fft (int) — +Length of the Fast Fourier Transform window. hop_length (int) — +Hop length (a higher number is recommended if y_res < 256). top_db (int) — +Loudest decibel value. n_iter (int) — +Number of iterations for Griffin-Lim Mel inversion. audio_slice_to_image < source > ( slice: int ) → PIL Image Parameters slice (int) — +Slice number of audio to convert (out of get_number_of_slices()). Returns +PIL Image + +A grayscale image of x_res x y_res. + Convert slice of audio to spectrogram. get_audio_slice < source > ( slice: int = 0 ) → np.ndarray Parameters slice (int) — +Slice number of audio (out of get_number_of_slices()). Returns +np.ndarray + +The audio slice as a NumPy array. + Get slice of audio. get_number_of_slices < source > ( ) → int Returns +int + +Number of spectrograms the audio can be sliced into. + Get number of slices in audio. get_sample_rate < source > ( ) → int Returns +int + +Sample rate of audio. + Get sample rate. image_to_audio < source > ( image: Image ) → audio (np.ndarray) Parameters image (PIL Image) — +A grayscale image of x_res x y_res. Returns +audio (np.ndarray) + +The audio as a NumPy array. + Converts spectrogram to audio. load_audio < source > ( audio_file: str = None raw_audio: ndarray = None ) Parameters audio_file (str) — +An audio file that must be on disk due to a Librosa limitation. raw_audio (np.ndarray) — +The raw audio file as a NumPy array. Load audio. set_resolution < source > ( x_res: int y_res: int ) Parameters x_res (int) — +x resolution of spectrogram (time). y_res (int) — +y resolution of spectrogram (frequency bins). Set resolution. diff --git a/scrapped_outputs/7d7bf2c0cc74c45b8b51705fce133f72.txt b/scrapped_outputs/7d7bf2c0cc74c45b8b51705fce133f72.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef8f603ddfc814bb38cda19aa526350c7005dee9 --- /dev/null +++ b/scrapped_outputs/7d7bf2c0cc74c45b8b51705fce133f72.txt @@ -0,0 +1,104 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder.
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. 
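As a quick illustration of the behavior described above, the following sketch (the concept repository and its placeholder token are assumptions chosen for the example) loads an embedding and expands the prompt before tokenization:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")  # assumed placeholder token: <cat-toy>

prompt = "A <cat-toy> backpack"
# A multi-vector embedding rewrites the single placeholder to "<cat-toy> <cat-toy>_1 ...";
# a single-vector embedding leaves the prompt unchanged.
print(pipe.maybe_convert_prompt(prompt, pipe.tokenizer))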
unload_textual_inversion < source > ( tokens: Union = None tokenizer: Optional = None text_encoder: Optional = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline. Example: Copied from diffusers import AutoPipelineForText2Image +import torch +from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") + +# Example 3: unload from SDXL +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") +embedding_path = hf_hub_download(repo_id="linoyts/web_y2k", filename="web_y2k_emb.safetensors", repo_type="model") + +# load embeddings to the text encoders +state_dict = load_file(embedding_path) + +# load embeddings of text_encoder 1 (CLIP ViT-L/14) +pipeline.load_textual_inversion(state_dict["clip_l"], token=["", ""], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) +# load embeddings of text_encoder 2 (CLIP ViT-G/14) +pipeline.load_textual_inversion(state_dict["clip_g"], token=["", ""], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) + +# Unload explicitly from both text encoders and tokenizers +pipeline.unload_textual_inversion(tokens=["", ""], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) +pipeline.unload_textual_inversion(tokens=["", ""], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) diff --git a/scrapped_outputs/7d8da69eeadc66a9eda1904456b342f7.txt b/scrapped_outputs/7d8da69eeadc66a9eda1904456b342f7.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/7d8da69eeadc66a9eda1904456b342f7.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs, and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub.
Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/7d93cfbdbca312434ed6004e840ef058.txt b/scrapped_outputs/7d93cfbdbca312434ed6004e840ef058.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/7d93cfbdbca312434ed6004e840ef058.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. 
Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. 
Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... 
"cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. diff --git a/scrapped_outputs/7d956a1f8a2f79b80e9c813d915a81da.txt b/scrapped_outputs/7d956a1f8a2f79b80e9c813d915a81da.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/7d956a1f8a2f79b80e9c813d915a81da.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. 
Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. 
Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. 
The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/7df115dad5a22c63d77198b77b70abf7.txt b/scrapped_outputs/7df115dad5a22c63d77198b77b70abf7.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/7df115dad5a22c63d77198b77b70abf7.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. 
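If you are curious about what AutoPipeline actually inspects, a minimal sketch of the idea (not the internal implementation) is to download just the checkpoint's model_index.json and read its _class_name field, which is the value the task-specific mapping is based on:

import json
from huggingface_hub import hf_hub_download

# Fetch only the pipeline config file, not the model weights
config_path = hf_hub_download("runwayml/stable-diffusion-v1-5", "model_index.json")
with open(config_path) as f:
    pipeline_config = json.load(f)

# AutoPipeline maps this class name to the pipeline class for your chosen task
print(pipeline_config["_class_name"])  # StableDiffusionPipeline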
You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. 
For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/7e2724ff3b9b025173809044a61a7960.txt b/scrapped_outputs/7e2724ff3b9b025173809044a61a7960.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/7e2724ff3b9b025173809044a61a7960.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! 
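If you want several butterflies in one call, a minimal sketch (the batch size, seed, and filenames are illustrative) is to pass batch_size and a seeded generator to the same checkpoint:

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")

# A seeded generator makes the batch reproducible
generator = torch.Generator(device="cuda").manual_seed(0)
images = pipeline(batch_size=4, generator=generator, num_inference_steps=100).images

for i, image in enumerate(images):
    image.save(f"butterfly_{i}.png")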
diff --git a/scrapped_outputs/7e4140b21fbf18c9cb43775fee415365.txt b/scrapped_outputs/7e4140b21fbf18c9cb43775fee415365.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/7e4140b21fbf18c9cb43775fee415365.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/7e43a81506b6275262495adf660f195e.txt b/scrapped_outputs/7e43a81506b6275262495adf660f195e.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/7e43a81506b6275262495adf660f195e.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. 
dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/7e4a859f3caea42be397d833ec641566.txt b/scrapped_outputs/7e4a859f3caea42be397d833ec641566.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/7e4a859f3caea42be397d833ec641566.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. 
Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. 
Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. 
The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/7e5adcde5a56e2624601e8577a53161a.txt b/scrapped_outputs/7e5adcde5a56e2624601e8577a53161a.txt new file mode 100644 index 0000000000000000000000000000000000000000..c988f5fbd3af0bec8702ce742b8e0ac4c55d6c07 --- /dev/null +++ b/scrapped_outputs/7e5adcde5a56e2624601e8577a53161a.txt @@ -0,0 +1,151 @@ +DPM Discrete Scheduler inspired by Karras et. al paper + + +Overview + +Inspired by Karras et. al. Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +KDPM2DiscreteScheduler + + +class diffusers.KDPM2DiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Scheduler created by @crowsonkb in k_diffusion, see: +https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 +Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. 
— +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/7e6413688b4da8d821c3fad7a76d85c9.txt b/scrapped_outputs/7e6413688b4da8d821c3fad7a76d85c9.txt new file mode 100644 index 0000000000000000000000000000000000000000..17a66448f05f2f6877f31f680c3ddabd2f0d30c9 --- /dev/null +++ b/scrapped_outputs/7e6413688b4da8d821c3fad7a76d85c9.txt @@ -0,0 +1,111 @@ +Merge LoRAs It can be fun and creative to use multiple LoRAs together to generate something entirely new and unique. This works by merging multiple LoRA weights together to produce images that are a blend of different styles. Diffusers provides a few methods to merge LoRAs depending on how you want to merge their weights, which can affect image quality. This guide will show you how to merge LoRAs using the set_adapters() and ~peft.LoraModel.add_weighted_adapter methods. To improve inference speed and reduce memory-usage of merged LoRAs, you’ll also see how to use the fuse_lora() method to fuse the LoRA weights with the original weights of the underlying model. For this guide, load a Stable Diffusion XL (SDXL) checkpoint and the KappaNeuro/studio-ghibli-style and Norod78/sdxl-chalkboarddrawing-lora LoRAs with the load_lora_weights() method. You’ll need to assign each LoRA an adapter_name to combine them later. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") set_adapters The set_adapters() method merges LoRA adapters by concatenating their weighted matrices. 
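As a simplified view of what that combination means, here is a plain PyTorch sketch with illustrative shapes (not the actual PEFT/Diffusers implementation): each active adapter contributes a scaled low-rank update on top of the base weight.

import torch

# Illustrative shapes for one linear layer and two rank-4 LoRA updates
d_out, d_in, rank = 64, 64, 4
base_weight = torch.randn(d_out, d_in)
delta_ikea = torch.randn(d_out, rank) @ torch.randn(rank, d_in)
delta_feng = torch.randn(d_out, rank) @ torch.randn(rank, d_in)

# adapter_weights scales each adapter's contribution before they are combined
adapter_weights = [0.7, 0.8]
effective_weight = base_weight + adapter_weights[0] * delta_ikea + adapter_weights[1] * delta_feng
print(effective_weight.shape)  # torch.Size([64, 64])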
Use the adapter name to specify which LoRAs to merge, and the adapter_weights parameter to control the scaling for each LoRA. For example, if adapter_weights=[0.5, 0.5], then the merged LoRA output is an average of both LoRAs. Try adjusting the adapter weights to see how it affects the generated image! Copied pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) + +generator = torch.manual_seed(0) +prompt = "A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai" +image = pipeline(prompt, generator=generator, cross_attention_kwargs={"scale": 1.0}).images[0] +image add_weighted_adapter This is an experimental method that adds PEFTs ~peft.LoraModel.add_weighted_adapter method to Diffusers to enable more efficient merging methods. Check out this issue if you’re interested in learning more about the motivation and design behind this integration. The ~peft.LoraModel.add_weighted_adapter method provides access to more efficient merging method such as TIES and DARE. To use these merging methods, make sure you have the latest stable version of Diffusers and PEFT installed. Copied pip install -U diffusers peft There are three steps to merge LoRAs with the ~peft.LoraModel.add_weighted_adapter method: Create a ~peft.PeftModel from the underlying model and LoRA checkpoint. Load a base UNet model and the LoRA adapters. Merge the adapters using the ~peft.LoraModel.add_weighted_adapter method and the merging method of your choice. Let’s dive deeper into what these steps entail. Load a UNet that corresponds to the UNet in the LoRA checkpoint. In this case, both LoRAs use the SDXL UNet as their base model. Copied from diffusers import UNet2DConditionModel +import torch + +unet = UNet2DConditionModel.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", + subfolder="unet", +).to("cuda") Load the SDXL pipeline and the LoRA checkpoints, starting with the ostris/ikea-instructions-lora-sdxl LoRA. Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16, + unet=unet +).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") Now you’ll create a ~peft.PeftModel from the loaded LoRA checkpoint by combining the SDXL UNet and the LoRA UNet from the pipeline. Copied from peft import get_peft_model, LoraConfig +import copy + +sdxl_unet = copy.deepcopy(unet) +ikea_peft_model = get_peft_model( + sdxl_unet, + pipeline.unet.peft_config["ikea"], + adapter_name="ikea" +) + +original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()} +ikea_peft_model.load_state_dict(original_state_dict, strict=True) You can optionally push the ikea_peft_model to the Hub by calling ikea_peft_model.push_to_hub("ikea_peft_model", token=TOKEN). Repeat this process to create a ~peft.PeftModel from the lordjia/by-feng-zikai LoRA. 
Copied pipeline.delete_adapters("ikea") +sdxl_unet.delete_adapters("ikea") + +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") +pipeline.set_adapters(adapter_names="feng") + +feng_peft_model = get_peft_model( + sdxl_unet, + pipeline.unet.peft_config["feng"], + adapter_name="feng" +) + +original_state_dict = {f"base_model.model.{k}": v for k, v in pipeline.unet.state_dict().items()} +feng_peft_model.load_state_dict(original_state_dict, strict=True) Load a base UNet model and then load the adapters onto it. Copied from peft import PeftModel + +base_unet = UNet2DConditionModel.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", + subfolder="unet", +).to("cuda") + +model = PeftModel.from_pretrained(base_unet, "stevhliu/ikea_peft_model", use_safetensors=True, subfolder="ikea", adapter_name="ikea") +model.load_adapter("stevhliu/feng_peft_model", use_safetensors=True, subfolder="feng", adapter_name="feng") Merge the adapters using the ~peft.LoraModel.add_weighted_adapter method and the merging method of your choice (learn more about other merging methods in this blog post). For this example, let’s use the "dare_linear" method to merge the LoRAs. Keep in mind the LoRAs need to have the same rank to be merged! Copied model.add_weighted_adapter( + adapters=["ikea", "feng"], + weights=[1.0, 1.0], + combination_type="dare_linear", + adapter_name="ikea-feng" +) +model.set_adapters("ikea-feng") Now you can generate an image with the merged LoRA. Copied model = model.to(dtype=torch.float16, device="cuda") + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=model, variant="fp16", torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] +image fuse_lora Both the set_adapters() and ~peft.LoraModel.add_weighted_adapter methods require loading the base model and the LoRA adapters separately, which incurs some overhead. The fuse_lora() method allows you to fuse the LoRA weights directly with the original weights of the underlying model. This way, you only load the model once, which can increase inference speed and lower memory usage. You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. For example, if you have a base model and adapters loaded and set as active with the following adapter weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") + +pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) Fuse these LoRAs into the UNet with the fuse_lora() method. The lora_scale parameter controls how much the output is scaled by the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline.
Copied pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0) Then you should use unload_lora_weights() to unload the LoRA weights since they’ve already been fused with the underlying base model. Finally, call save_pretrained() to save the fused pipeline locally or you could call push_to_hub() to push the fused pipeline to the Hub. Copied pipeline.unload_lora_weights() +# save locally +pipeline.save_pretrained("path/to/fused-pipeline") +# save to the Hub +pipeline.push_to_hub("fused-ikea-feng") Now you can quickly load the fused pipeline and use it for inference without needing to separately load the LoRA adapters. Copied pipeline = DiffusionPipeline.from_pretrained( + "username/fused-ikea-feng", torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] +image You can call unfuse_lora() to restore the original model’s weights (for example, if you want to use a different lora_scale value). However, this only works if you’ve only fused one LoRA adapter to the original model. If you’ve fused multiple LoRAs, you’ll need to reload the model. Copied pipeline.unfuse_lora() torch.compile torch.compile can speed up your pipeline even more, but the LoRA weights must be fused first and then unloaded. Typically, the UNet is compiled because it is such a computationally intensive component of the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +# load base model and LoRAs +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("lordjia/by-feng-zikai", weight_name="fengzikai_v1.0_XL.safetensors", adapter_name="feng") + +# activate both LoRAs and set adapter weights +pipeline.set_adapters(["ikea", "feng"], adapter_weights=[0.7, 0.8]) + +# fuse LoRAs and unload weights +pipeline.fuse_lora(adapter_names=["ikea", "feng"], lora_scale=1.0) +pipeline.unload_lora_weights() + +# torch.compile +pipeline.unet.to(memory_format=torch.channels_last) +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) + +image = pipeline("A bowl of ramen shaped like a cute kawaii bear, by Feng Zikai", generator=torch.manual_seed(0)).images[0] Learn more about torch.compile in the Accelerate inference of text-to-image diffusion models guide. Next steps For more conceptual details about how each merging method works, take a look at the 🤗 PEFT welcomes new merging methods blog post! diff --git a/scrapped_outputs/7e8da299de5d1649d7a7df8bbefc10b4.txt b/scrapped_outputs/7e8da299de5d1649d7a7df8bbefc10b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..da9dc0ec726865012f9af88d847ee90e8b1fd0b6 --- /dev/null +++ b/scrapped_outputs/7e8da299de5d1649d7a7df8bbefc10b4.txt @@ -0,0 +1,227 @@ +Configuration + +Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which conveniently takes care of storing all the parameters that are +passed to their respective __init__ methods in a JSON-configuration file. + +ConfigMixin + + +class diffusers.ConfigMixin + +< +source +> +( +) + + + +Base class for all configuration classes. 
Stores all configuration parameters under self.config Also handles all +methods for loading/downloading/saving classes inheriting from ConfigMixin with +from_config() +save_config() +Class attributes: +config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). +ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). +has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). +_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). + +load_config + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +return_unused_kwargs = False +return_commit_hash = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using save_config(), e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config shall be returned. + + +return_commit_hash (bool, optional, defaults to `False) — +Whether the commit_hash of the loaded configuration shall be returned. + + + +Instantiate a Python class from a config dictionary +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. 
+Activate the special “offline-mode” to +use this method in a firewalled environment. + +from_config + +< +source +> +( +config: typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +config (Dict[str, Any]) — +A config dictionary from which the Python class will be instantiated. Make sure to only load +configuration files of compatible classes. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the Python class. +**kwargs will be directly passed to the underlying scheduler/model’s __init__ method and eventually +overwrite same named arguments of config. + + + +Instantiate a Python class from a config dictionary + +Examples: + + + Copied +>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) + +save_config + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a configuration object to the directory save_directory, so that it can be re-loaded using the +from_config() class method. + +to_json_file + +< +source +> +( +json_file_path: typing.Union[str, os.PathLike] + +) + + +Parameters + +json_file_path (str or os.PathLike) — +Path to the JSON file in which this configuration instance’s parameters will be saved. + + + +Save this instance to a JSON file. + +to_json_string + +< +source +> +( +) +→ +str + +Returns + +str + + + +String containing all the attributes that make up this configuration instance in JSON format. + + +Serializes this instance to a JSON string. diff --git a/scrapped_outputs/7e94a76d8c89d6aff69b9c4f06e8e8ac.txt b/scrapped_outputs/7e94a76d8c89d6aff69b9c4f06e8e8ac.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/7e94a76d8c89d6aff69b9c4f06e8e8ac.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.1 inherits best practicies from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. 
The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... ).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image
+import torch
+import requests
+from io import BytesIO
+from PIL import Image
+
+pipe = AutoPipelineForImage2Image.from_pretrained(
+    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
+)
+pipe.enable_model_cpu_offload()
+
+prompt = "A fantasy landscape, Cinematic lighting"
+negative_prompt = "low quality, bad quality"
+
+url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
+
+response = requests.get(url)
+original_image = Image.open(BytesIO(response.content)).convert("RGB")
+original_image.thumbnail((768, 768))
+
+image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class XLMRobertaTokenizer. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline.
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, used to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or a numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the text prompt, which will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image.
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting
+from diffusers.utils import load_image
+import torch
+import numpy as np
+
+pipe = AutoPipelineForInpainting.from_pretrained(
+    "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16
+)
+pipe.enable_model_cpu_offload()
+
+prompt = "A fantasy landscape, Cinematic lighting"
+negative_prompt = "low quality, bad quality"
+
+original_image = load_image(
+    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
+)
+
+mask = np.zeros((768, 768), dtype=np.float32)
+# Let's mask out an area above the cat's head
+mask[:250, 250:-250] = 1
+
+image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/7e98c8067a040c95d302efec056258ac.txt b/scrapped_outputs/7e98c8067a040c95d302efec056258ac.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/7e98c8067a040c95d302efec056258ac.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end.
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. 
prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/7eaff63e04e0cd2811351b772f5e16a3.txt b/scrapped_outputs/7eaff63e04e0cd2811351b772f5e16a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7eb98ba1f3feb446b2b952652fb8cbe3.txt b/scrapped_outputs/7eb98ba1f3feb446b2b952652fb8cbe3.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/7eb98ba1f3feb446b2b952652fb8cbe3.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers
+cd diffusers
+pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter
+pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config
+
+write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like.
For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \
+  --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. This guide instead takes a look at the T2I-Adapter-relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images. Copied conditioning_image_transforms = transforms.Compose(
+    [
+        transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
+        transforms.CenterCrop(args.resolution),
+        transforms.ToTensor(),
+    ]
+) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path:
+    logger.info("Loading existing adapter weights.")
+    t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path)
+else:
+    logger.info("Initializing t2iadapter weights.")
+    t2iadapter = T2IAdapter(
+        in_channels=3,
+        channels=(320, 640, 1280, 1280),
+        num_res_blocks=2,
+        downscale_factor=16,
+        adapter_type="full_adapter_xl",
+    ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters()
+optimizer = optimizer_class(
+    params_to_optimize,
+    lr=args.learning_rate,
+    betas=(args.adam_beta1, args.adam_beta2),
+    weight_decay=args.adam_weight_decay,
+    eps=args.adam_epsilon,
+) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype)
+down_block_additional_residuals = t2iadapter(t2iadapter_image)
+down_block_additional_residuals = [
+    sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals
+]
+
+model_pred = unet(
+    inp_noisy_latents,
+    timesteps,
+    encoder_hidden_states=batch["prompt_ids"],
+    added_cond_kwargs=batch["unet_added_conditions"],
+    down_block_additional_residuals=down_block_additional_residuals,
+).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model.
Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png
+wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps parameters to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0"
+export OUTPUT_DIR="path to save model"
+
+accelerate launch train_t2i_adapter_sdxl.py \
+  --pretrained_model_name_or_path=$MODEL_DIR \
+  --output_dir=$OUTPUT_DIR \
+  --dataset_name=fusing/fill50k \
+  --mixed_precision="fp16" \
+  --resolution=1024 \
+  --learning_rate=1e-5 \
+  --max_train_steps=15000 \
+  --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
+  --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
+  --validation_steps=100 \
+  --train_batch_size=1 \
+  --gradient_accumulation_steps=4 \
+  --report_to="wandb" \
+  --seed=42 \
+  --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler
+from diffusers.utils import load_image
+import torch
+
+adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16)
+pipeline = StableDiffusionXLAdapterPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16
+)
+
+pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
+pipeline.enable_xformers_memory_efficient_attention()
+pipeline.enable_model_cpu_offload()
+
+control_image = load_image("./conditioning_image_1.png")
+prompt = "pale golden rod circle with old lace background"
+
+generator = torch.manual_seed(0)
+image = pipeline(
+    prompt, image=control_image, generator=generator
+).images[0]
+image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/7f32567be157bd0e92b1c3b760f808bc.txt b/scrapped_outputs/7f32567be157bd0e92b1c3b760f808bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/7f32567be157bd0e92b1c3b760f808bc.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead.
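As a quick illustration of that distinction, the sketch below shows both loading paths side by side; the repository id and weight file name are placeholders, and in practice you would pick only one of the two calls:
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Option 1: load LoRA weights into the UNet and, when the file contains them, the text encoder(s)
pipeline.load_lora_weights("some-user/some-lora", weight_name="pytorch_lora_weights.safetensors")

# Option 2: load the LoRA (attention processor) weights into the UNet only
pipeline.unet.load_attn_procs("some-user/some-lora", weight_name="pytorch_lora_weights.safetensors")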
The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. 
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/7f3f7610689ae457a581fead024347c5.txt b/scrapped_outputs/7f3f7610689ae457a581fead024347c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7f669cdcbc6370986f05928fe70b0f6d.txt b/scrapped_outputs/7f669cdcbc6370986f05928fe70b0f6d.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/7f669cdcbc6370986f05928fe70b0f6d.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. 
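The base setters above can be combined with the progress-bar helpers mentioned earlier; here is a minimal sketch that only uses functions documented in this section:
from diffusers.utils import logging

# Report only errors and hide the tqdm bars shown during model downloads
logging.set_verbosity_error()
logging.disable_progress_bar()

# ... load models or pipelines quietly here ...

# Restore the default behavior afterwards
logging.set_verbosity_warning()
logging.enable_progress_bar()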
Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/7fa20beeaca2bc5a2c817feaf6fa59f9.txt b/scrapped_outputs/7fa20beeaca2bc5a2c817feaf6fa59f9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/7ffba9bd0fdaf39540c012f0acab9cf9.txt b/scrapped_outputs/7ffba9bd0fdaf39540c012f0acab9cf9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/804f1a3d5b986e408f4494387a2ddc93.txt b/scrapped_outputs/804f1a3d5b986e408f4494387a2ddc93.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/804f1a3d5b986e408f4494387a2ddc93.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. 
in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. +num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. 
The output of TransformerTemporalModel. diff --git a/scrapped_outputs/805a7cd21022dd9c2dc147b37757f89b.txt b/scrapped_outputs/805a7cd21022dd9c2dc147b37757f89b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/80772b25225a348eba5a103deeda79f2.txt b/scrapped_outputs/80772b25225a348eba5a103deeda79f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/808c14e5f74ac7e9f95b5d84a48ae081.txt b/scrapped_outputs/808c14e5f74ac7e9f95b5d84a48ae081.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/808c14e5f74ac7e9f95b5d84a48ae081.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. 
You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. 
+A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. 
This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/809febe4bebfc09fef1952e02c279795.txt b/scrapped_outputs/809febe4bebfc09fef1952e02c279795.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/809febe4bebfc09fef1952e02c279795.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
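Before diving into the API reference, a minimal text-to-image sketch with UnCLIPPipeline is shown below; the checkpoint id kakaobrain/karlo-v1-alpha is an assumption based on the karlo reference above, so substitute whichever unCLIP checkpoint you actually use: Copied import torch
from diffusers import UnCLIPPipeline

# The checkpoint id is an assumption; swap in the unCLIP checkpoint you want to use.
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"
image = pipe(prompt, num_images_per_prompt=1).images[0]
image.save("unclip_frog.png")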
UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. 
decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/80bcf74d552243482e931313f5e243f8.txt b/scrapped_outputs/80bcf74d552243482e931313f5e243f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..c82e25825d8d9963f7b4b0f30bedbc489b9e96a3 --- /dev/null +++ b/scrapped_outputs/80bcf74d552243482e931313f5e243f8.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. 
TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. +num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → ~models.transformer_temporal.TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +~models.transformer_temporal.TransformerTemporalModelOutput or tuple + +If return_dict is True, an ~models.transformer_temporal.TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformers.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. diff --git a/scrapped_outputs/80d8b05c1e23c1d723b24992c8f335aa.txt b/scrapped_outputs/80d8b05c1e23c1d723b24992c8f335aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..848931d1969089ae8a8d21d431c071f2b1f6f901 --- /dev/null +++ b/scrapped_outputs/80d8b05c1e23c1d723b24992c8f335aa.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. 
out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, defaults to True) — +If enabled, the VAE is forced to run in float32 for high-resolution pipelines such as SD-XL. The VAE +can be fine-tuned or trained to a lower range without losing too much precision, in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental.
set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. 
out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
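Tying the AutoencoderKL methods above together, here is a minimal PyTorch sketch that round-trips an image tensor through the VAE (encode to latents, apply scaling_factor, decode back) with the slicing and tiling helpers enabled; the checkpoint id stabilityai/sd-vae-ft-mse and the 512x512 input are assumptions for illustration: Copied import torch
from diffusers import AutoencoderKL

# Checkpoint id is an assumption; any AutoencoderKL checkpoint works the same way.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda")

# Optional memory savers documented above.
vae.enable_slicing()
vae.enable_tiling()

# Stand-in image batch in [-1, 1]; in practice this comes from a preprocessed PIL image.
image = torch.randn(1, 3, 512, 512, dtype=torch.float16, device="cuda")

with torch.no_grad():
    # Encode to the latent space and apply the scaling expected by diffusion models.
    latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
    # Undo the scaling and decode back to image space.
    reconstruction = vae.decode(latents / vae.config.scaling_factor).sample

print(latents.shape, reconstruction.shape)  # e.g. (1, 4, 64, 64) and (1, 3, 512, 512)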
diff --git a/scrapped_outputs/812298273c81c98fed877b3bd6c56c6b.txt b/scrapped_outputs/812298273c81c98fed877b3bd6c56c6b.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/812298273c81c98fed877b3bd6c56c6b.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. 
device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/816561f4fab518574b287ba58167d4e7.txt b/scrapped_outputs/816561f4fab518574b287ba58167d4e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/816561f4fab518574b287ba58167d4e7.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. 
❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. +>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/81a20e4707c8df41fccc8babd13c69ad.txt b/scrapped_outputs/81a20e4707c8df41fccc8babd13c69ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/81a9618f5e88e0feee2f8bf60387b873.txt b/scrapped_outputs/81a9618f5e88e0feee2f8bf60387b873.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/81d3a09056479b3f4ea802a6dbbaa7b6.txt b/scrapped_outputs/81d3a09056479b3f4ea802a6dbbaa7b6.txt new file mode 100644 index 0000000000000000000000000000000000000000..40f5c2affe25ec0a3fd2019cbe50ae1c850c37e8 --- /dev/null +++ b/scrapped_outputs/81d3a09056479b3f4ea802a6dbbaa7b6.txt @@ -0,0 +1,437 @@ +Pipelines + +The DiffusionPipeline is the easiest way to load any pretrained diffusion pipeline from the Hub and to use it in inference. +One should not use the Diffusion Pipeline class for training or fine-tuning a diffusion model. Individual + components of diffusion pipelines are usually trained individually, so we suggest to directly work + with `UNetModel` and `UNetConditionModel`. + +Any diffusion pipeline that is loaded with from_pretrained() will automatically +detect the pipeline type, e.g. StableDiffusionPipeline and consequently load each component of the +pipeline and pass them into the __init__ function of the pipeline, e.g. __init__(). +Any pipeline object can be saved locally with save_pretrained(). + +DiffusionPipeline + + +class diffusers.DiffusionPipeline + +< +source +> +( +) + + + +Base class for all models. 
+DiffusionPipeline takes care of storing all components (models, schedulers, processors) for diffusion pipelines +and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to: +move all PyTorch modules to the device of your choice +enabling/disabling the progress bar for the denoising iteration +Class attributes: +config_name (str) — name of the config file that will store the class and module names of all +components of the diffusion pipeline. +_optional_components (Liststr) — list of all components that are optional so they don’t have to be +passed for the pipeline to function (should be overridden by subclasses). + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +device + +< +source +> +( +) +→ +torch.device + +Returns + +torch.device + + + +The torch device on which the pipeline is located. + + + +to + +< +source +> +( +torch_device: typing.Union[str, torch.device, NoneType] = None + +) + + + + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id of a pretrained pipeline hosted inside a model repo on +https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like +CompVis/ldm-text2im-large-256. +A path to a directory containing pipeline weights saved using +save_pretrained(), e.g., ./my_pipeline_directory/. + + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +custom_pipeline (str, optional) — + +This is an experimental feature and is likely to change in the future. + +Can be either: + + +A string, the repo id of a custom pipeline hosted inside a model repo on +https://huggingface.co/. Valid repo ids have to be located under a user or organization name, +like hf-internal-testing/diffusers-dummy-pipeline. + +It is required that the model repo has a file, called pipeline.py that defines the custom +pipeline. + + + +A string, the file name of a community pipeline hosted on GitHub under +https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to +match exactly the file name without .py located under the above link, e.g. +clip_guided_stable_diffusion. + +Community pipelines are always loaded from the current main branch of GitHub. + + + +A path to a directory containing a custom pipeline, e.g., ./my_pipeline_directory/. + +It is required that the directory has a file, called pipeline.py that defines the custom +pipeline. + + + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines + + +torch_dtype (str or torch.dtype, optional) — + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, will use the token generated +when running huggingface-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +custom_revision (str, optional, defaults to "main" when loading from the Hub and to local version of diffusers when loading from GitHub) — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a +custom pipeline from GitHub. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. specify the folder name here. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +return_cached_folder (bool, optional, defaults to False) — +If set to True, path to downloaded cached folder will be returned in addition to loaded pipeline. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load - and saveable variables - i.e. the pipeline components - of the +specific pipeline class. The overwritten components are then directly passed to the pipelines +__init__ method. See example below for more information. + + + +Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights. +The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models, e.g. "runwayml/stable-diffusion-v1-5" +Activate the special “offline-mode” to use +this method in a firewalled environment. + +Examples: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. 
+>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler + +numpy_to_pil + +< +source +> +( +images + +) + + + +Convert a numpy image or a batch of images to a PIL image. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + + +Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to +a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading +method. The pipeline can easily be re-loaded using the [from_pretrained()](/docs/diffusers/v0.12.0/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) class method. + +ImagePipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.ImagePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + + +Output class for image pipelines. + +AudioPipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.AudioPipelineOutput + +< +source +> +( +audios: ndarray + +) + + +Parameters + +audios (np.ndarray) — +List of denoised samples of shape (batch_size, num_channels, sample_rate). Numpy array present the +denoised audio samples of the diffusion pipeline. + + + +Output class for audio pipelines. diff --git a/scrapped_outputs/81db8dfd5fc5dd5f166761e9b9567460.txt b/scrapped_outputs/81db8dfd5fc5dd5f166761e9b9567460.txt new file mode 100644 index 0000000000000000000000000000000000000000..2f374186f988bc051b7ff44ed2b21dbe42cea043 --- /dev/null +++ b/scrapped_outputs/81db8dfd5fc5dd5f166761e9b9567460.txt @@ -0,0 +1,33 @@ +Load community pipelines + +Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. +There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. +To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. 
For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: +🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline" +) +Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: + + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, +) +For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! diff --git a/scrapped_outputs/81e35c240578f6b8df6d5eee7b6016ac.txt b/scrapped_outputs/81e35c240578f6b8df6d5eee7b6016ac.txt new file mode 100644 index 0000000000000000000000000000000000000000..11477af7da0355430f35587a5aa097be653d9a3d --- /dev/null +++ b/scrapped_outputs/81e35c240578f6b8df6d5eee7b6016ac.txt @@ -0,0 +1,68 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. 
With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.000009) — +The ending cumulative alpha value. gamma_cum_start (float, defaults to 0.000009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms): +q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0) + . . . + . . . + . . . +q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k) +cumulative result (omitting logarithms): +q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) + . . . + . . . + . . . +q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1}) + Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t.
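In practice the scheduler is driven end to end by VQDiffusionPipeline rather than called method by method. As a minimal sketch of the reverse loop (the sizes and the random stand-in for the transformer output are purely illustrative, not part of the reference above):

 Copied
import torch
from diffusers import VQDiffusionScheduler

num_classes = 10          # number of vector classes, including the masked class
num_latent_pixels = 16    # a real model uses e.g. 32x32 = 1024 latent pixels

scheduler = VQDiffusionScheduler(num_vec_classes=num_classes)
scheduler.set_timesteps(100)

# Start from the fully masked state; the masked class is the last index
sample = torch.full((1, num_latent_pixels), num_classes - 1, dtype=torch.long)

for t in scheduler.timesteps:
    # Stand-in for the transformer's predicted log p(x_0), shape (batch, num classes - 1, num latent pixels)
    log_p_x_0 = torch.randn(1, num_classes - 1, num_latent_pixels).log_softmax(dim=1)
    sample = scheduler.step(log_p_x_0, timestep=t, sample=sample).prev_sample

Each step() call applies the posterior described by q_posterior() below and samples the latent classes for the previous, less noisy timestep.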
q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used. Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1. + Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved +to. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. generator (torch.Generator, or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computer. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/82038a90a8c5c88f30f75c5e12f07b34.txt b/scrapped_outputs/82038a90a8c5c88f30f75c5e12f07b34.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/82038a90a8c5c88f30f75c5e12f07b34.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. 
The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. 
You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/820a6a650d74bcdd786f56de0c740acd.txt b/scrapped_outputs/820a6a650d74bcdd786f56de0c740acd.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/820a6a650d74bcdd786f56de0c740acd.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! 
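If you are not sure whether a GPU is actually available in your runtime, a quick check (not part of the original tutorial) before continuing is:

 Copied
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" on a free Colab GPU
else:
    print("No CUDA GPU found; inference will fall back to the CPU and be much slower")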
One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! 
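If you want to confirm these timings on your own hardware, a minimal measurement sketch (reusing the pipeline, prompt, and scheduler set up above; the exact numbers will vary by GPU) is:

 Copied
import time
import torch

generator = torch.Generator("cuda").manual_seed(0)

start = time.perf_counter()
image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0]
print(f"Generated one image in {time.perf_counter() - start:.1f} seconds")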
⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? 
With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/822f1472db773b6272ac105d5ab1d2f1.txt b/scrapped_outputs/822f1472db773b6272ac105d5ab1d2f1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/82ca2b338df6abd3add9f39eacdf6014.txt b/scrapped_outputs/82ca2b338df6abd3add9f39eacdf6014.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/82ca2b338df6abd3add9f39eacdf6014.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. 
This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. 
It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 
📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline. Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. 
Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
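If you want to confirm which case applies to your environment, a quick check of the installed PyTorch version is enough:

import torch

# scaled_dot_product_attention ships with PyTorch 2.0 and later;
# when it is available, Diffusers uses it automatically and no extra call is needed.
print(torch.__version__)
print(hasattr(torch.nn.functional, "scaled_dot_product_attention"))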
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/83086a4a93a8e9681d0812ef3b0922ec.txt b/scrapped_outputs/83086a4a93a8e9681d0812ef3b0922ec.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/83086a4a93a8e9681d0812ef3b0922ec.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. 
For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. 
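For reference, the CLIP score computed by the torchmetrics clip_score function used below is the cosine similarity between the CLIP image embedding $E_I$ and the CLIP text embedding $E_C$ of the caption, scaled by 100 and clamped at zero:

$$\mathrm{CLIPScore}(I, C) = \max\big(100 \cdot \cos(E_I, E_C),\ 0\big)$$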
Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +device = "cuda" +weight_dtype = torch.float16 + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=weight_dtype).to(device) Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score over the images generated for each prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline, we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be considerably higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated with an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline as an example. It takes an edit instruction as an input prompt and an input image to be edited.
Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
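Written out explicitly (matching the PyTorch module defined below), the directional similarity is the cosine similarity between the change in CLIP image space and the change in CLIP text space:

$$\mathrm{sim}_{\mathrm{dir}} = \cos\big(E_{I_2} - E_{I_1},\ E_{C_2} - E_{C_1}\big)$$

where $E_{I_1}$ and $E_{I_2}$ are the normalized CLIP embeddings of the original and edited images, and $E_{C_1}$ and $E_{C_2}$ are the normalized CLIP embeddings of the original and modified captions.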
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
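For completeness, the Fréchet distance between the two fitted Gaussians has the standard closed form: with $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ denoting the mean and covariance of the Inception features of the real and generated images,

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big)$$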
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
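If you also want to report KID on the same mini datasets, torchmetrics exposes it through a very similar interface. The sketch below assumes a torchmetrics version that supports the normalize flag (as used for FID above) and picks a small subset_size because there are only 10 images per set:

from torchmetrics.image.kid import KernelInceptionDistance

# KID compares Inception features with a polynomial-kernel MMD instead of a Fréchet distance.
# subset_size must not exceed the number of available samples; we only have 10 images per set.
kid = KernelInceptionDistance(subset_size=5, normalize=True)
kid.update(real_images, real=True)
kid.update(fake_images, real=False)

kid_mean, kid_std = kid.compute()  # mean and standard deviation over the sampled subsets
print(f"KID: {float(kid_mean):.4f} ± {float(kid_std):.4f}")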
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/838ce1c7fa172398653ef28b3ebbf5f4.txt b/scrapped_outputs/838ce1c7fa172398653ef28b3ebbf5f4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/83933cd5d4791c9c3a8811713e508dc2.txt b/scrapped_outputs/83933cd5d4791c9c3a8811713e508dc2.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/83933cd5d4791c9c3a8811713e508dc2.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/8393f9afc2fba7c73016df694b368d03.txt b/scrapped_outputs/8393f9afc2fba7c73016df694b368d03.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/8393f9afc2fba7c73016df694b368d03.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. 
For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/83a94f356b9b3e175716ed1e39c3521f.txt b/scrapped_outputs/83a94f356b9b3e175716ed1e39c3521f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/83ab8c292fed2fd477284049f091a594.txt b/scrapped_outputs/83ab8c292fed2fd477284049f091a594.txt new file mode 100644 index 0000000000000000000000000000000000000000..8dc3604b4b9c771e172750704b5ebd2c5de8bc3e --- /dev/null +++ b/scrapped_outputs/83ab8c292fed2fd477284049f091a594.txt @@ -0,0 +1,122 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. 
Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/83ae19d1bee55d8d15f43799e6aae553.txt b/scrapped_outputs/83ae19d1bee55d8d15f43799e6aae553.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/83ae19d1bee55d8d15f43799e6aae553.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/83d4cea296db75a216d08508266ca401.txt b/scrapped_outputs/83d4cea296db75a216d08508266ca401.txt new file mode 100644 index 0000000000000000000000000000000000000000..69e339a22b6a777886b4591068d9fe0483d63180 --- /dev/null +++ b/scrapped_outputs/83d4cea296db75a216d08508266ca401.txt @@ -0,0 +1,139 @@ +Inverse Denoising Diffusion Implicit Models (DDIMInverse) + + +Overview + +This scheduler is the inverted scheduler of Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models + +DDIMInverseScheduler + + +class diffusers.DDIMInverseScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +clip_sample: bool = True +set_alpha_to_zero: bool = True +steps_offset: int = 0 +prediction_type: str = 'epsilon' +clip_sample_range: float = 1.0 +**kwargs + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +clip_sample (bool, default True) — +option to clip predicted sample for numerical stability. + + +clip_sample_range (float, default 1.0) — +the maximum magnitude for sample clipping. Valid only when clip_sample=True. + + +set_alpha_to_zero (bool, default True) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 0, +otherwise it uses the value of alpha at step num_train_timesteps - 1. + + +steps_offset (int, default 0) — +an offset added to the inference steps.
You can use a combination of offset=1 and +set_alpha_to_zero=False, to make the last step use step num_train_timesteps - 1 for the previous alpha +product. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2010.02502 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. diff --git a/scrapped_outputs/83d6f119b8782023c44490a0e79d37aa.txt b/scrapped_outputs/83d6f119b8782023c44490a0e79d37aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..987c9209fcde600484b42a955615d555013bf385 --- /dev/null +++ b/scrapped_outputs/83d6f119b8782023c44490a0e79d37aa.txt @@ -0,0 +1,367 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. 
SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. 
For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +...
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. 
Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
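As an illustration of how encode_prompt() fits into a typical workflow, the minimal sketch below precomputes the prompt embeddings once and reuses them across several image-to-image calls; the keyword arguments follow the parameter list above but may differ slightly between diffusers versions:
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt (and negative prompt) a single time
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

# Reuse the cached embeddings for several strength settings without re-encoding the text
for strength in (0.5, 0.75):
    image = pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        image=init_image,
        strength=strength,
    ).images[0]
    image.save(f"fantasy_landscape_{strength}.png")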
fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... 
return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/83fdd77ecb5a6df565c460f59aa51aed.txt b/scrapped_outputs/83fdd77ecb5a6df565c460f59aa51aed.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/83fdd77ecb5a6df565c460f59aa51aed.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. 
To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
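To make the memory-saving helpers above concrete, here is a minimal sketch that combines them on the GLIGEN pipeline (checkpoint and grounding inputs reused from the earlier example; actual savings depend on your hardware and inputs):
import torch
from diffusers import StableDiffusionGLIGENPipeline

pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
)

# Keep submodules on the CPU and move each one to the GPU only while it runs
pipe.enable_model_cpu_offload()
# Slicing helps when decoding several images per call; tiling helps at large resolutions
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

image = pipe(
    prompt="a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage",
    gligen_phrases=["a waterfall", "a modern high speed train running through the tunnel"],
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]],
    gligen_scheduled_sampling_beta=1,
    num_inference_steps=50,
).images[0]
image.save("gligen-low-memory.jpg")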
StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to procces reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project image embedding into phrases embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box input_phrases_mask (int or List[int]) — +pre phrases mask input defined by the correspongding input_phrases_mask input_images_mask (int or List[int]) — +pre images mask input defined by the correspongding input_images_mask gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. 
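The gligen_boxes argument documented above expects coordinates normalized to the [0, 1] range as [xmin, ymin, xmax, ymax]. A small illustrative helper for converting pixel-space boxes (this helper is an assumption for demonstration purposes, not part of the pipeline API):
Copied
def normalize_box(box_px, width, height):
    # box_px is [xmin, ymin, xmax, ymax] in pixels; returns the [0, 1] normalized
    # form expected by the gligen_boxes argument.
    xmin, ymin, xmax, ymax = box_px
    return [xmin / width, ymin / height, xmax / width, ymax / height]

# e.g. one region of interest on a 512x512 input image
boxes = [normalize_box([137, 209, 244, 368], width=512, height=512)]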
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
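encode_prompt() can also be called on its own to precompute embeddings once and reuse them across several generations. A minimal sketch, assuming pipe is the StableDiffusionGLIGENTextImagePipeline loaded in the example above on a CUDA device and that encode_prompt() returns a (prompt_embeds, negative_prompt_embeds) pair as in recent diffusers releases:
Copied
# `pipe` is assumed to be an already loaded StableDiffusionGLIGENTextImagePipeline.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "a backpack on a park bench",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)
print(prompt_embeds.shape)  # e.g. torch.Size([1, 77, 768]) for a CLIP ViT-L text encoder
The resulting tensors can then be passed to the pipeline call through the prompt_embeds and negative_prompt_embeds arguments instead of a raw prompt string.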
diff --git a/scrapped_outputs/840860a9e500cc78ca85937d075f13d5.txt b/scrapped_outputs/840860a9e500cc78ca85937d075f13d5.txt new file mode 100644 index 0000000000000000000000000000000000000000..13e167b0959d0af366c380365e13d791431b1240 --- /dev/null +++ b/scrapped_outputs/840860a9e500cc78ca85937d075f13d5.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/8477cdf48eeb092d44f392faf6dedc8c.txt b/scrapped_outputs/8477cdf48eeb092d44f392faf6dedc8c.txt new file mode 100644 index 0000000000000000000000000000000000000000..8d7d87b86ae5af7a7fd008d5f2ca5264b88e97d8 --- /dev/null +++ b/scrapped_outputs/8477cdf48eeb092d44f392faf6dedc8c.txt @@ -0,0 +1,110 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: + + +```bash +cd examples/text_to_image +pip install -r requirements.txt +``` + + +```bash +cd examples/text_to_image +pip install -r requirements_flax.txt +``` + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. 
Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_text_to_image script starts by loading a scheduler and tokenizer. 
You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 + + +Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub + + +Training with Flax can be faster on TPUs and GPUs thanks to @duongna211. Flax is more efficient on a TPU, but GPU performance is also great. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model to. 
Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub + + +Once training is complete, you can use your newly trained model for inference: + + + + Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") + + + + Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path/to/saved_model", dtype=jax.numpy.bfloat16) + +prompt = "yoda pokemon" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +images[0].save("yoda-pokemon.png") + + + Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/848b67c430804f5447c9829b6236c29e.txt b/scrapped_outputs/848b67c430804f5447c9829b6236c29e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/848b67c430804f5447c9829b6236c29e.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model.
Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/84a912e267ec4f4d649f058c2ef84f20.txt b/scrapped_outputs/84a912e267ec4f4d649f058c2ef84f20.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/84b264bf140fde2b46da4fdb3ed23f87.txt b/scrapped_outputs/84b264bf140fde2b46da4fdb3ed23f87.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/84b264bf140fde2b46da4fdb3ed23f87.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. 
+The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image based on the text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will automatically accept it for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next, install diffusers and its dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default, diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
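For example, the stage_1 and stage_2 pipelines loaded in the text-to-image example above can be re-wrapped in place, as the Converting between different pipelines section further down also shows. A minimal sketch:
Copied
from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# Reuse the sub-models of the already loaded text-to-image pipelines without
# downloading or loading any weights a second time.
stage_1_img2img = IFImg2ImgPipeline(**stage_1.components)
stage_2_img2img = IFImg2ImgSuperResolutionPipeline(**stage_2.components)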
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can also be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for fewer timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w in equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler and is ignored for other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker. + Function invoked when calling the pipeline for generation.
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
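The example above calls encode_prompt() with only a prompt string. As a minimal, hypothetical sketch (the negative prompt text is a placeholder and not part of the original example), the optional arguments documented above can also be passed explicitly, and the returned embeddings reused by later stages loaded with text_encoder=None:

>>> import torch
>>> from diffusers import IFImg2ImgPipeline

>>> pipe = IFImg2ImgPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # Encode once with the stage-I T5 text encoder; the returned embeddings can be
>>> # fed to the super-resolution stage so it does not need its own text encoder.
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(
...     "A fantasy landscape in style minecraft",
...     do_classifier_free_guidance=True,
...     num_images_per_prompt=1,
...     negative_prompt="blurry, low quality",  # placeholder negative prompt
...     clean_caption=True,
... )

Passing the precomputed prompt_embeds and negative_embeds to the stage-II pipeline follows the same pattern as the example above.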
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device (torch.device, optional) — +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, used to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1.
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
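The snippets above leave generator, strength, and guidance_scale at their defaults. A minimal sketch (the seed and argument values are illustrative, and original_image and mask_image are assumed to be loaded as in the example above) of a reproducible stage-I inpainting call might look like:

>>> import torch
>>> from diffusers import IFInpaintingPipeline

>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> prompt_embeds, negative_embeds = pipe.encode_prompt("blue sunglasses")
>>> generator = torch.Generator(device="cpu").manual_seed(0)  # fixed seed for repeatable results

>>> image = pipe(
...     image=original_image,            # PIL image loaded earlier
...     mask_image=mask_image,           # white pixels are repainted, black pixels are kept
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     generator=generator,
...     strength=1.0,                    # 1.0 runs the full num_inference_steps of denoising
...     guidance_scale=7.0,
...     num_inference_steps=50,
... ).images[0]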
diff --git a/scrapped_outputs/84d5191f823dffffe736d77348b20868.txt b/scrapped_outputs/84d5191f823dffffe736d77348b20868.txt new file mode 100644 index 0000000000000000000000000000000000000000..c83b691df90e2cf0a16c5dba417017f4a2e8d9c3 --- /dev/null +++ b/scrapped_outputs/84d5191f823dffffe736d77348b20868.txt @@ -0,0 +1,246 @@ +Pipelines The DiffusionPipeline is the quickest way to load any pretrained diffusion pipeline from the Hub for inference. You shouldn’t use the DiffusionPipeline class for training or finetuning a diffusion model. Individual +components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. The pipeline type (for example StableDiffusionPipeline) of any diffusion pipeline loaded with from_pretrained() is automatically +detected and pipeline components are loaded and passed to the __init__ function of the pipeline. Any pipeline object can be saved locally with save_pretrained(). DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... 
) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. 
We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: typing.Union[str, int, NoneType] = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: typing.Optional[int] = None device: typing.Union[torch.device, str] = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
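As a minimal sketch of the offloading trade-off described above (the checkpoint and prompt are illustrative and match the ones used elsewhere in these docs), enable_model_cpu_offload() is called on the pipeline before inference; enable_sequential_cpu_offload(), documented below, could be used instead when memory savings matter more than speed:

>>> import torch
>>> from diffusers import DiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
... )

>>> # Whole-model offloading: each model is moved to the GPU only when its forward method
>>> # is called and stays there until the next model runs, trading a little speed for memory.
>>> pipe.enable_model_cpu_offload()

>>> # Alternative (use one or the other, not both): submodule-level offloading with
>>> # larger memory savings but noticeably slower inference.
>>> # pipe.enable_sequential_cpu_offload()

>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]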
enable_sequential_cpu_offload < source > ( gpu_id: typing.Optional[int] = None device: typing.Union[torch.device, str] = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: typing.Optional[typing.Callable] = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. 
+ +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. 
If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: typing.Union[str, os.PathLike] safe_serialization: bool = True variant: typing.Optional[str] = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. 
safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. diff --git a/scrapped_outputs/84d8bc9e23f37b431313a4e2bfa8466b.txt b/scrapped_outputs/84d8bc9e23f37b431313a4e2bfa8466b.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/84d8bc9e23f37b431313a4e2bfa8466b.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. 
Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/85394ccba0be25050df53601e26f3a16.txt b/scrapped_outputs/85394ccba0be25050df53601e26f3a16.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/853e892bcd9c7082951f705213c87ee3.txt b/scrapped_outputs/853e892bcd9c7082951f705213c87ee3.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/853e892bcd9c7082951f705213c87ee3.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. diff --git a/scrapped_outputs/854ee2f1e0c186058883ba6b2ba8ed99.txt b/scrapped_outputs/854ee2f1e0c186058883ba6b2ba8ed99.txt new file mode 100644 index 0000000000000000000000000000000000000000..65bdd33b14d0e8369c584da67430e389c4261fd0 --- /dev/null +++ b/scrapped_outputs/854ee2f1e0c186058883ba6b2ba8ed99.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Hide Pytorch content Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. 
+We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands will link the folder you cloned the repository to and your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache which is usually your home directory. You can change the cache location by specifying the HF_HOME or HUGGINFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. +The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. +You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/856f9776a2abddee4b2fee99c313f162.txt b/scrapped_outputs/856f9776a2abddee4b2fee99c313f162.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/8582ce5b0e5b5b2ddefd4dd604a799fa.txt b/scrapped_outputs/8582ce5b0e5b5b2ddefd4dd604a799fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfa3d716ecf3de4b47681af162e2893154070538 --- /dev/null +++ b/scrapped_outputs/8582ce5b0e5b5b2ddefd4dd604a799fa.txt @@ -0,0 +1,60 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. 
Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. 
Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/8590226a1e6df1325c6111b03893b8e6.txt b/scrapped_outputs/8590226a1e6df1325c6111b03893b8e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..173b882d6bb0b0500124b1e8f97633b6bc0e5c16 --- /dev/null +++ b/scrapped_outputs/8590226a1e6df1325c6111b03893b8e6.txt @@ -0,0 +1,62 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
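As a rough illustration of what that guide covers, a local folder of images with a metadata.jsonl file of captions can usually be loaded with the 🤗 Datasets library and then pointed to from the training command (for example through the script's --train_data_dir argument). This is only a minimal sketch under those assumptions; the folder path is a placeholder: Copied from datasets import load_dataset

# Expected layout (see the "Create a dataset for training" guide):
#   my_dataset/train/metadata.jsonl   <- one {"file_name": ..., "text": ...} entry per image
#   my_dataset/train/0001.png, 0002.png, ...
dataset = load_dataset("imagefolder", data_dir="my_dataset")

# Each example holds the decoded image plus the caption column from metadata.jsonl
print(dataset["train"][0])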
The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our yown Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. 
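If you are launching from a notebook rather than a shell, the same variables can be set with os.environ before invoking accelerate launch. This is a small sketch that simply mirrors the export commands shown further below: Copied import os

# Same values as the shell `export` lines in the launch command below
os.environ["MODEL_NAME"] = "runwayml/stable-diffusion-v1-5"
os.environ["DATASET_NAME"] = "lambdalabs/pokemon-blip-captions"
os.environ["OUTPUT_DIR"] = "/sddata/finetune/lora/pokemon"
os.environ["HUB_MODEL_ID"] = "pokemon-lora"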
The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/85bdf93ac26d3abb0e0fec7056a72c41.txt b/scrapped_outputs/85bdf93ac26d3abb0e0fec7056a72c41.txt new file mode 100644 index 0000000000000000000000000000000000000000..682e7ed4ade907ab1a141f47a047e5803e87a77a --- /dev/null +++ b/scrapped_outputs/85bdf93ac26d3abb0e0fec7056a72c41.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity to get the current level of verbosity in the logger and +logging.set_verbosity to set the verbosity to the level of your choice. 
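For example, a quick sketch of using the two together, reading the current level and then restricting output to errors only (the level constants are listed right after): Copied import diffusers

# Read the current level as an integer (30, i.e. WARNING, unless it was changed)
print(diffusers.logging.get_verbosity())

# Only report errors from now on
diffusers.logging.set_verbosity(diffusers.logging.ERROR)

# The direct setter shown earlier is equivalent
diffusers.logging.set_verbosity_error()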
In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar and logging.enable_progress_bar are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/85eff3caeaa4af6bae880fadbfaa1adb.txt b/scrapped_outputs/85eff3caeaa4af6bae880fadbfaa1adb.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/85eff3caeaa4af6bae880fadbfaa1adb.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. 
KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
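To show how add_noise_to_input(), step(), and the step_correct() method documented next fit together, here is a rough sampling-loop sketch loosely modeled on the reference Karras VE pipeline. It is not a drop-in recipe: the trained UNet is replaced by a dummy denoiser so the snippet stays self-contained, and the return values of add_noise_to_input() (the noised sample together with the increased sigma) as well as the schedule attribute are assumed from that reference implementation rather than stated above: Copied import torch
from dataclasses import dataclass
from diffusers import KarrasVeScheduler


@dataclass
class DummyOutput:
    sample: torch.Tensor


class DummyDenoiser:
    # Stand-in for a trained UNet2DModel; a real model also returns an object with a `.sample` attribute
    def __call__(self, x, t):
        return DummyOutput(sample=torch.zeros_like(x))


model = DummyDenoiser()
scheduler = KarrasVeScheduler()
scheduler.set_timesteps(50)

# Start from pure noise at the largest noise level
sample = torch.randn(1, 3, 64, 64) * scheduler.config.sigma_max

for t in scheduler.timesteps:
    # `schedule` holds the sigmas set by set_timesteps() (attribute name taken from the reference pipeline)
    sigma = scheduler.schedule[t]
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # 1. "churn": temporarily raise the noise level and add matching noise
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma)

    # 2. Euler step from sigma_hat down to sigma_prev
    model_output = (sigma_hat / 2) * model((sample_hat + 1) / 2, sigma_hat / 2).sample
    step_output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

    # 3. Second-order correction, skipped at the final step
    if sigma_prev != 0:
        model_output = (sigma_prev / 2) * model((step_output.prev_sample + 1) / 2, sigma_prev / 2).sample
        step_output = scheduler.step_correct(
            model_output, sigma_hat, sigma_prev, sample_hat, step_output.prev_sample, step_output.derivative
        )

    sample = step_output.prev_sample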
step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/85f03f9c517af4e3c80b48f1c6a4096f.txt b/scrapped_outputs/85f03f9c517af4e3c80b48f1c6a4096f.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/85f03f9c517af4e3c80b48f1c6a4096f.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions. 
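Before the parameter table and API reference, a small sketch of loading one of the two released checkpoints and peeking at the components named in the class signature below; the amused/amused-256 repo id is assumed here by analogy with the amused/amused-512 checkpoint used in the examples further down: Copied import torch
from diffusers import AmusedPipeline

# The 256-resolution checkpoint (parameter counts for both models are listed in the table below)
pipe = AmusedPipeline.from_pretrained("amused/amused-256", torch_dtype=torch.float16)

# No UNet here: images are decoded from VQ tokens predicted by a U-ViT transformer
print(type(pipe.vqvae).__name__)        # VQModel
print(type(pipe.transformer).__name__)  # UVit2DModel
print(type(pipe.scheduler).__name__)    # AmusedScheduler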
Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +gneration. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
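Since the introduction stresses that aMUSEd's speed advantage grows with batch size, here is a short sketch generating several images per prompt in the default 12 steps. It reuses the checkpoint and dtype from the example above, with num_images_per_prompt and num_inference_steps as documented in the __call__ parameters: Copied import torch
from diffusers import AmusedPipeline

pipe = AmusedPipeline.from_pretrained(
    "amused/amused-512", variant="fp16", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# Larger batches amortize the few forward passes aMUSEd needs per image
images = pipe(prompt, num_images_per_prompt=4, num_inference_steps=12).images
for i, image in enumerate(images):
    image.save(f"astronaut_{i}.png")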
class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list +of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for a numpy array, the expected shape would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/85f278b8024063ee50a394744ba053c5.txt b/scrapped_outputs/85f278b8024063ee50a394744ba053c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/85f278b8024063ee50a394744ba053c5.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. 
Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. 
This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. 
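Before invoking the Swift CLI, it can save time to confirm that the directory passed to --resource-path actually contains compiled .mlmodelc bundles. The snippet below is a minimal sketch (the helper name and the example path are illustrative and not part of Apple's tooling): Copied from pathlib import Path

# Illustrative check: list the compiled Core ML bundles under a resource path
# and fail early if none are found (for example, if a "packages" variant was
# downloaded instead of a "compiled" one).
def has_compiled_bundles(resource_path: str) -> bool:
    bundles = list(Path(resource_path).rglob("*.mlmodelc"))
    for bundle in bundles:
        print(f"Found compiled bundle: {bundle}")
    return len(bundles) > 0

if not has_compiled_bundles("models/coreml-stable-diffusion-v1-4_original_compiled"):
    raise SystemExit("No .mlmodelc bundles found - did you download a compiled variant?")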
Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/860c73e7cf460178a6ab571bd18628bc.txt b/scrapped_outputs/860c73e7cf460178a6ab571bd18628bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..552fc9d1655a6840094e03d21339f40c7b49403d --- /dev/null +++ b/scrapped_outputs/860c73e7cf460178a6ab571bd18628bc.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. 
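To make the continuous-input case above concrete, here is a minimal sketch; the dimensions are illustrative and chosen only so the tensor shapes line up, they are not taken from the text: Copied import torch
from diffusers import Transformer2DModel

# Tiny Transformer2DModel on continuous (image-like) input. With the default
# cross_attention_dim=None the blocks run pure self-attention, so no text
# conditioning or timestep embedding is needed.
model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=16,  # inner dimension = 2 * 16 = 32
    in_channels=32,         # continuous input: channel count of the latents
    num_layers=1,
)

latents = torch.randn(1, 32, 16, 16)  # (batch, channels, height, width)
output = model(hidden_states=latents)
print(output.sample.shape)  # torch.Size([1, 32, 16, 16]), same shape as the input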
Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. 
cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. diff --git a/scrapped_outputs/861b7b5fb9ba84f6076e40f91fe7db49.txt b/scrapped_outputs/861b7b5fb9ba84f6076e40f91fe7db49.txt new file mode 100644 index 0000000000000000000000000000000000000000..67d20ffe84f73ce9a3ad3216bb740471c0ea1a73 --- /dev/null +++ b/scrapped_outputs/861b7b5fb9ba84f6076e40f91fe7db49.txt @@ -0,0 +1,93 @@ +Text-to-image model editing Editing Implicit Assumptions in Text-to-Image Diffusion Models is by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov. This pipeline enables editing diffusion model weights, such that its assumptions of a given concept are changed. The resulting change is expected to take effect in all prompt generations related to the edited concept. The abstract from the paper is: Text-to-image diffusion models often make implicit assumptions about the world when generating images. While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a “source” under-specified prompt for which the model makes an implicit assumption (e.g., “a pack of roses”), and a “destination” prompt that describes the same setting, but with a specified desired attribute (e.g., “a pack of blue roses”). TIME then updates the model’s cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model’s parameters in under one second. 
To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations. You can find additional information about model editing on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionModelEditingPipeline class diffusers.StableDiffusionModelEditingPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: SchedulerMixin safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True with_to_k: bool = True with_augs: list = ['A photo of ', 'An image of ', 'A picture of '] ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPFeatureExtractor) — +A CLIPFeatureExtractor to extract features from generated images; used as inputs to the safety_checker. with_to_k (bool) — +Whether to edit the key projection matrices along with the value projection matrices. with_augs (list) — +Textual augmentations to apply while editing the text-to-image model. Set to [] for no augmentations. Pipeline for text-to-image model editing. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. 
Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionModelEditingPipeline + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt) + +>>> pipe = pipe.to("cuda") + +>>> source_prompt = "A pack of roses" +>>> destination_prompt = "A pack of blue roses" +>>> pipe.edit_model(source_prompt, destination_prompt) + +>>> prompt = "A field of roses" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. edit_model < source > ( source_prompt: str destination_prompt: str lamb: float = 0.1 restart_params: bool = True ) Parameters source_prompt (str) — +The source prompt containing the concept to be edited. destination_prompt (str) — +The destination prompt. Must contain all words from source_prompt with additional ones to specify the +target edit. lamb (float, optional, defaults to 0.1) — +The lambda parameter specifying the regularization intensity. Smaller values increase the editing power. restart_params (bool, optional, defaults to True) — +Restart the model parameters to their pre-trained version before editing. This is done to avoid edit +compounding. When it is False, edits accumulate. Apply model editing via closed-form solution (see Eq. 5 in the TIME paper). enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
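As a usage note for edit_model, the following hedged sketch (the prompts are made up for illustration) applies two edits in sequence; the second call lowers lamb for a stronger edit and passes restart_params=False so that it does not undo the first one: Copied from diffusers import StableDiffusionModelEditingPipeline

pipe = StableDiffusionModelEditingPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

# First edit with the default regularization strength.
pipe.edit_model("A pack of roses", "A pack of blue roses", lamb=0.1)

# Second edit: a smaller lamb increases the editing power, and restart_params=False
# keeps the first edit in place instead of resetting to the pre-trained weights.
pipe.edit_model("A bouquet of tulips", "A bouquet of yellow tulips", lamb=0.05, restart_params=False)

image = pipe("A pack of roses next to a bouquet of tulips").images[0]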
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/86211a7baa99419cd445e4d15a8828b0.txt b/scrapped_outputs/86211a7baa99419cd445e4d15a8828b0.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/86211a7baa99419cd445e4d15a8828b0.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin, Anastasia Maltseva, Igor Pavlov, Andrei Filatov, Arseniy Shakhmatov, Andrey Kuznetsov, Denis Dimitrov, and Zein Shaheen. The description from its GitHub page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder-decoder model based on the T5 architecture. A new U-Net architecture featuring BigGAN-deep blocks, which doubles the depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. 
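To see how these components map onto the diffusers implementation, the following sketch loads the pipeline and inspects the classes backing them (the class names in the comments are expectations based on the pipeline signature below, not quotes from the text): Copied import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)

# The three components described above, as exposed on the pipeline object.
print(type(pipe.text_encoder).__name__)  # expected: T5EncoderModel (FLAN-UL2)
print(type(pipe.unet).__name__)          # expected: Kandinsky3UNet
print(type(pipe.movq).__name__)          # expected: VQModel (Sber-MoVQGAN)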
The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." +>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. 
prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. diff --git a/scrapped_outputs/862e587ca0ddc69a936380ce86868173.txt b/scrapped_outputs/862e587ca0ddc69a936380ce86868173.txt new file mode 100644 index 0000000000000000000000000000000000000000..f855e68b8bc1a27f4f6c0425b7b6ee65371c12be --- /dev/null +++ b/scrapped_outputs/862e587ca0ddc69a936380ce86868173.txt @@ -0,0 +1,68 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. 
“melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1 and the text encoding +model is a joint text-audio model (ClapModel) whose tokenizer is a +ClapProcessor, then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for audio +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return an AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation.
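In addition to the example that follows, the tips at the top of this page (a descriptive prompt, a negative prompt, and several candidate waveforms) can be combined in a single call. The snippet below is a minimal sketch rather than part of the official reference; the checkpoint id and output rate are taken from the example below, while the negative prompt, step count, audio length, and waveform count are illustrative choices: Copied
import torch
import scipy
from diffusers import MusicLDMPipeline

# Load MusicLDM in half precision and move it to the GPU.
pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
negative_prompt = "low quality, average quality"  # negative prompt suggested in the tips above

# Generate several candidate waveforms; with num_waveforms_per_prompt > 1 they are
# automatically scored against the prompt and returned ranked from best to worst.
audios = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=3,
).audios

# The first waveform is the highest-scoring candidate; save it at 16 kHz as in the example below.
scipy.io.wavfile.write("techno_best.wav", rate=16000, data=audios[0])
With the default output_type="np", audios holds NumPy waveforms, so any audio library that accepts NumPy arrays can be used to save or play them.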
Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. diff --git a/scrapped_outputs/86759279b5372fc80cc4603e30d85f77.txt b/scrapped_outputs/86759279b5372fc80cc4603e30d85f77.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb6ce0d7b29d717bc4cf9298fa18ceb1edda813 --- /dev/null +++ b/scrapped_outputs/86759279b5372fc80cc4603e30d85f77.txt @@ -0,0 +1,338 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 
Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the +expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask the image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). For a numpy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop.
The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings.
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +...
"runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. 
See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/86a2399f1c48925244a0d3a6e49bc8c6.txt b/scrapped_outputs/86a2399f1c48925244a0d3a6e49bc8c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/86a2399f1c48925244a0d3a6e49bc8c6.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. 
Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. 
The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 
16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train an SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/86b39b61efc92ba108993d4b431b06ad.txt b/scrapped_outputs/86b39b61efc92ba108993d4b431b06ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..31cbdde7d3f5e542cc1b460fc970c1544f49a07d --- /dev/null +++ b/scrapped_outputs/86b39b61efc92ba108993d4b431b06ad.txt @@ -0,0 +1,71 @@ +Load LoRAs for inference There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects.
You can even combine multiple adapters to create new and unique images. With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with an SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive!
As you can see, the model was able to generate an image that mixes the characteristics of both adapters. If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the Unet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the Unet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] Saving a pipeline after fusing the adapters To properly save a pipeline after it’s been loaded with the adapters, it should be serialized like so: Copied pipe.fuse_lora(lora_scale=1.0) +pipe.unload_lora_weights() +pipe.save_pretrained("path-to-pipeline") diff --git a/scrapped_outputs/86b96883865a1a9e0c91efe2c130a6ef.txt b/scrapped_outputs/86b96883865a1a9e0c91efe2c130a6ef.txt new file mode 100644 index 
0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/86b96883865a1a9e0c91efe2c130a6ef.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
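As a quick orientation, the snippet below is a minimal sketch that instantiates UVit2DModel with the default configuration shown above and resets its attention processors; it assumes UVit2DModel is exported from the top-level diffusers package, as in recent releases.

```python
# Minimal sketch: build UVit2DModel with its default config and reset the
# attention processors to the library default.
from diffusers import UVit2DModel  # assumes a recent diffusers release

model = UVit2DModel()  # all constructor arguments fall back to the defaults documented above
num_params = sum(p.numel() for p in model.parameters())
print(f"UVit2DModel parameters: {num_params / 1e6:.1f}M")

# Disable any custom attention processors and use the default implementation.
model.set_default_attn_processor()
```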
UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/86e706de870ccd27888ac64bb066d319.txt b/scrapped_outputs/86e706de870ccd27888ac64bb066d319.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/86e706de870ccd27888ac64bb066d319.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. 
Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. 
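To use PNDMScheduler with an existing pipeline, you can rebuild it from the pipeline’s current scheduler config. The sketch below is a hedged example: it assumes the runwayml/stable-diffusion-v1-5 checkpoint and explicitly sets skip_prk_steps so only the PLMS (linear multistep) updates are used.

```python
# Hedged sketch: swap a pipeline's scheduler for PNDMScheduler.
import torch
from diffusers import DiffusionPipeline, PNDMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# skip_prk_steps=True skips the Runge-Kutta warmup and uses only PLMS steps.
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config, skip_prk_steps=True)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=50).images[0]
```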
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/86ff38fee7566d4a093fda584d6ffe0f.txt b/scrapped_outputs/86ff38fee7566d4a093fda584d6ffe0f.txt new file mode 100644 index 0000000000000000000000000000000000000000..9e00876461007b85dd1f7f798abf317e313c05a5 --- /dev/null +++ b/scrapped_outputs/86ff38fee7566d4a093fda584d6ffe0f.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
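In practice, you prepare a conditioning image (for example, a Canny edge map), load a ControlNet checkpoint trained for that condition, and pass both the prompt and the conditioning image to the pipeline. The following is a hedged sketch; the lllyasviel/sd-controlnet-canny checkpoint, the Canny thresholds, and the input path are examples rather than fixed requirements.

```python
# Hedged sketch: condition Stable Diffusion on a Canny edge map (requires opencv-python).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Build an edge map from any input image (the path is a placeholder).
image = np.array(Image.open("input.png").convert("RGB"))
edges = cv2.Canny(image, 100, 200)
canny_image = Image.fromarray(np.concatenate([edges[:, :, None]] * 3, axis=2))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a futuristic city at night", image=canny_image, num_inference_steps=30
).images[0]
```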
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. 
encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. 
If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. 
The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
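If you are training your own ControlNet, the from_unet() method documented above initializes the control branch from an existing UNet. The sketch below is illustrative and assumes the runwayml/stable-diffusion-v1-5 weights; it also shows set_attention_slice() for reducing peak memory.

```python
# Illustrative sketch: create a ControlNet from a pretrained UNet and enable
# sliced attention to trade a little speed for lower memory usage.
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)
controlnet = ControlNetModel.from_unet(unet)  # copies matching weights from the UNet

# "auto" halves the attention head input so attention runs in two steps.
controlnet.set_attention_slice("auto")
```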
diff --git a/scrapped_outputs/870f6275fb5a59d9404e67b4f5013850.txt b/scrapped_outputs/870f6275fb5a59d9404e67b4f5013850.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/870f6275fb5a59d9404e67b4f5013850.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. 
Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. 
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). 
Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". 
offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. 
Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/871e8be80385e8188776910f970fe100.txt b/scrapped_outputs/871e8be80385e8188776910f970fe100.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4b37b0a52f96659677efd85840d7f5e1ea639c --- /dev/null +++ b/scrapped_outputs/871e8be80385e8188776910f970fe100.txt @@ -0,0 +1,41 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. 
This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. 
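The following is an illustrative condensation of that training step, not the script’s actual code; the function signature and variable names are hypothetical, and it assumes the default epsilon (noise) prediction objective.

```python
# Hypothetical sketch of one text-to-image training step (epsilon prediction).
import torch
import torch.nn.functional as F

def training_step(vae, text_encoder, unet, noise_scheduler, optimizer, pixel_values, input_ids):
    # Encode images into latent space and apply the VAE scaling factor.
    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor

    # Add noise at a random timestep (forward diffusion).
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition on the text embeddings and predict the added noise.
    encoder_hidden_states = text_encoder(input_ids)[0]
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    loss = F.mse_loss(model_pred.float(), noise.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```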
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 PyTorch Flax Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub Once training is complete, you can use your newly trained model for inference: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/8758088c39efffac5f513ecec696a881.txt b/scrapped_outputs/8758088c39efffac5f513ecec696a881.txt new file mode 100644 index 0000000000000000000000000000000000000000..9aa85d8f0ea063fa9ca9cc4c67b274c3a1854a84 --- /dev/null +++ b/scrapped_outputs/8758088c39efffac5f513ecec696a881.txt @@ -0,0 +1,624 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls.
The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
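Because the controlnet argument also accepts a list, one possible (illustrative) way to stack several SDXL ControlNets is sketched below; the canny and depth checkpoints are the same ones used in the examples later on this page, and with multiple ControlNets the image and controlnet_conditioning_scale arguments are likewise passed as lists: Copied
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Load two ControlNets; their outputs are added together during denoising.
controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0-small", torch_dtype=torch.float16),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# canny_image and depth_image would be prepared as in the canny and depth examples below
# image = pipe(
#     "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
#     image=[canny_image, depth_image],
#     controlnet_conditioning_scale=[0.5, 0.5],
# ).images[0]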
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. 
As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. 
Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. 
Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. 
+original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: Optional = None image_encoder: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ration of the image and +contains all masked area, and then expand that area based on padding_mask_crop. 
The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contain information inreleant for inpainging, such as background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. 
aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +...
).images[0] encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
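If you want to reuse the same text conditioning across several generations (for example, to try different seeds or control images without re-encoding the prompt), the embeddings returned by encode_prompt can be passed back to the pipeline through the prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, and negative_pooled_prompt_embeds arguments documented above. The snippet below is only a minimal sketch: it assumes the pipe, init_image, mask_image, and control_image objects from the example above, and the four-tuple return order shown here follows the SDXL pipelines at the time of writing and may differ between versions. Copied
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt="a handsome man with ray-ban sunglasses",
...     negative_prompt="low quality, blurry",
...     do_classifier_free_guidance=True,
... )

>>> # reuse the cached embeddings with different seeds instead of re-encoding the prompt each time
>>> for seed in (0, 1, 2):
...     image = pipe(
...         prompt_embeds=prompt_embeds,
...         negative_prompt_embeds=negative_prompt_embeds,
...         pooled_prompt_embeds=pooled_prompt_embeds,
...         negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...         image=init_image,
...         mask_image=mask_image,
...         control_image=control_image,
...         num_inference_steps=20,
...         generator=torch.Generator(device="cpu").manual_seed(seed),
...     ).images[0]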
diff --git a/scrapped_outputs/875f6bad19f6d9261c75bed0428b5334.txt b/scrapped_outputs/875f6bad19f6d9261c75bed0428b5334.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/875f6bad19f6d9261c75bed0428b5334.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. 
Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/8763df983064388e175a629e5c955881.txt b/scrapped_outputs/8763df983064388e175a629e5c955881.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/8763df983064388e175a629e5c955881.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. 
It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll write a function to generate the timestep weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see how the timestep weights are calculated and used to sample the timesteps before the noise is added: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. 
You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." +image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training a SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use it’s refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/87692333d76fb664f1384aed0cb64635.txt b/scrapped_outputs/87692333d76fb664f1384aed0cb64635.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/87692333d76fb664f1384aed0cb64635.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. 
You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. 
With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. 
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. 
Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/87706f6d023b9ee4595dbb488a90eaa1.txt b/scrapped_outputs/87706f6d023b9ee4595dbb488a90eaa1.txt new file mode 100644 index 0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/87706f6d023b9ee4595dbb488a90eaa1.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. 
Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. 
Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/877c563d70208f87ea38e9f82ff6d801.txt b/scrapped_outputs/877c563d70208f87ea38e9f82ff6d801.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/877c563d70208f87ea38e9f82ff6d801.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. 
Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. 
Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/87a616867e9c710197f7b7b9deb6744b.txt b/scrapped_outputs/87a616867e9c710197f7b7b9deb6744b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6239505b8ff5f3f7eb6043b475677f1d948af531 --- /dev/null +++ b/scrapped_outputs/87a616867e9c710197f7b7b9deb6744b.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes, or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timestep and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timestep. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. 
Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timestep * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step, and modifies the pipeline attributes and tensor variables for the next denoising step. With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! Interrupt the diffusion process Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. This callback function should take the following arguments: pipe, i, t, and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) 
With the 🤗 PEFT integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with Stable Diffusion XL (SDXL) for inference. Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the LoRA guide. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate +!pip install peft +!pip install diffusers Now, let’s load a pipeline with a SDXL checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a LoRA checkpoint with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which let’s you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") And then perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images, and let’s call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter. But you can activate the "pixel" adapter with the set_adapters() method as shown below: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Let’s now generate an image with the second adapter and check the result: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Combine multiple adapters You can also perform multi-adapter inference where you combine different adapter checkpoints for inference. Once again, use the set_adapters() method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) Now that we have set these two adapters, let’s generate an image from the combined adapters! LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. The trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl are found in their repositories. Copied # Notice how the prompt is constructed. +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters. 
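Before switching back to a single adapter, it can be instructive to see how the balance between the two adapters changes the output. One simple way to explore this is to sweep the adapter_weights passed to set_adapters(); the sketch below assumes the pipe object and the "toy" and "pixel" adapters loaded above, and the specific weight values are only illustrative. Copied
import torch

prompt = "toy_face of a hacker with a hoodie, pixel art"
# keep the "toy" weight fixed and vary the "pixel" weight to shift the style mix
for pixel_weight in (0.25, 0.5, 0.75):
    pipe.set_adapters(["pixel", "toy"], adapter_weights=[pixel_weight, 1.0])
    image = pipe(
        prompt, num_inference_steps=30, generator=torch.manual_seed(0)
    ).images[0]
    image.save(f"pixel_weight_{pixel_weight}.png")
Fixing the seed while varying the weights makes it easier to attribute differences in the results to the adapter mix rather than to sampling noise.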
If you want to go back to using only one adapter, use the set_adapters() method to activate the "toy" adapter: Copied # First, set the adapter. +pipe.set_adapters("toy") + +# Then, run inference. +prompt = "toy_face of a hacker with a hoodie" +lora_scale = 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image If you want to switch to only the base model, disable all LoRAs with the disable_lora() method. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Monitoring active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the get_active_adapters() method: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} Fusing adapters into the model You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the fuse_lora() method, which can lead to a speed-up in inference and lower VRAM usage. Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) +# Fuses the LoRAs into the UNet +pipe.fuse_lora() + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the UNet back to the original state +pipe.unfuse_lora() You can also fuse some adapters using adapter_names for faster generation: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") + +pipe.set_adapters(["pixel"], adapter_weights=[0.5]) +# Fuses the LoRAs into the UNet +pipe.fuse_lora(adapter_names=["pixel"]) + +prompt = "a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] + +# Gets the UNet back to the original state +pipe.unfuse_lora() + +# Fuse all adapters +pipe.fuse_lora(adapter_names=["pixel", "toy"]) + +prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] diff --git a/scrapped_outputs/87ab5cde9542c1bf8bbcb79b6cda6693.txt b/scrapped_outputs/87ab5cde9542c1bf8bbcb79b6cda6693.txt new file mode 100644 index 0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/87ab5cde9542c1bf8bbcb79b6cda6693.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input.
It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. 
norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/87b19967f41286b93aa0a6bb52d129fe.txt b/scrapped_outputs/87b19967f41286b93aa0a6bb52d129fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..350fcde2194ed65053fe8201403456d5175dba21 --- /dev/null +++ b/scrapped_outputs/87b19967f41286b93aa0a6bb52d129fe.txt @@ -0,0 +1,98 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. 
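As a minimal sketch of these tips (the checkpoint name and prompt are only examples, not part of the scheduler documentation), the snippet below swaps a Stable Diffusion pipeline’s default scheduler for DPMSolverMultistepScheduler configured with the recommended algorithm_type="sde-dpmsolver++" and solver_order=2, and then samples in 20 steps. Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config, switching to the SDE variant of DPM-Solver++
# with the second-order solver recommended for guided sampling.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", solver_order=2
)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]
image.save("astronaut_dpmsolver.png")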
DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/87b8356e7068fe4134e60eb614246e72.txt b/scrapped_outputs/87b8356e7068fe4134e60eb614246e72.txt new file mode 100644 index 0000000000000000000000000000000000000000..0ee0e400598d2a2833e04ab333507f2ff56ef276 --- /dev/null +++ b/scrapped_outputs/87b8356e7068fe4134e60eb614246e72.txt @@ -0,0 +1,42 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. 
The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None time_cond_proj_dim: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. 
The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/87bf7bb7f3c822e459546c09efd408e7.txt b/scrapped_outputs/87bf7bb7f3c822e459546c09efd408e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/87bf7bb7f3c822e459546c09efd408e7.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. 
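Before looking at the CPU and GPU cases separately, a tiny standalone sketch (not taken from the pipeline code) makes this behavior concrete: unseeded torch.randn calls return different tensors on every run, while draws made through a Generator seeded with the same value always match. Copied
import torch

# Unseeded draws: different values each time the script runs.
print(torch.randn(3))
print(torch.randn(3))

# Seeded draws: two generators with the same seed produce identical tensors,
# run after run.
g1 = torch.Generator().manual_seed(0)
g2 = torch.Generator().manual_seed(0)
print(torch.randn(3, generator=g1))
print(torch.randn(3, generator=g2))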
CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. 
In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/87d0e84a5567d37ee0c35c09f385c873.txt b/scrapped_outputs/87d0e84a5567d37ee0c35c09f385c873.txt new file mode 100644 index 0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/87d0e84a5567d37ee0c35c09f385c873.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? 
The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/87d2ec3d0bf96549609f53717eb8db25.txt b/scrapped_outputs/87d2ec3d0bf96549609f53717eb8db25.txt new file mode 100644 index 0000000000000000000000000000000000000000..2ebf16bdbbf104fcc4ba517ac8a8648595655aeb --- /dev/null +++ b/scrapped_outputs/87d2ec3d0bf96549609f53717eb8db25.txt @@ -0,0 +1,256 @@ +Schedulers + +Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this are the Schedulers. +Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: +How many denoising steps? +Stochastic or deterministic? +What algorithm to use to find the denoised sample +They can be quite complex and often define a trade-off between denoising speed and denoising quality. 
+It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. +The following paragraphs shows how to do so with the 🧨 Diffusers library. + +Load pipeline + +Let’s start by loading the stable diffusion pipeline. +Remember that you have to be a registered user on the 🤗 Hugging Face Hub, and have “click-accepted” the license in order to use stable diffusion. + + + Copied +from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +# first we need to login with our access token +login() + +# Now we can download the pipeline +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +Next, we move it to GPU: + + + Copied +pipeline.to("cuda") + +Access the scheduler + +The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. + + + Copied +pipeline.scheduler +Output: + + + Copied +PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.8.0.dev0", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "trained_betas": null +} +We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: + + + Copied +prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." +Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + + +Changing the scheduler + +Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property SchedulerMixin.compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. + + + Copied +pipeline.scheduler.compatibles +Output: + + + Copied +[diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] +Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: +LMSDiscreteScheduler, +DDIMScheduler, +DPMSolverMultistepScheduler, +EulerDiscreteScheduler, +PNDMScheduler, +DDPMScheduler, +EulerAncestralDiscreteScheduler. +We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient ConfigMixin.config property in combination with the ConfigMixin.from_config() function. 
+ + + Copied +pipeline.scheduler.config +returns a dictionary of the configuration of the scheduler: +Output: + + + Copied +FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('steps_offset', 1), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.8.0.dev0'), + ('clip_sample', False)]) +This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. + + + Copied +from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +Cool, now we can run the pipeline again to compare the generation quality. + + + Copied +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + +If you are a JAX/Flax user, please check this section instead. + +Compare schedulers + +So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps, let’s compare them here: +LMSDiscreteScheduler usually leads to better results: + + + Copied +from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image + + + +EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. + + + Copied +from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +and: + + + Copied +from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image + + + +At the time of writing this doc DPMSolverMultistepScheduler gives arguably the best speed/quality trade-off and can be run with as little +as 20 steps. + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image + + + +As you can see most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. + +Changing the Scheduler in Flax + +If you are a JAX/Flax user, you can also change the default pipeline scheduler. 
This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DDPM-Solver++ scheduler: + + + Copied +import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: +FlaxLMSDiscreteScheduler +FlaxDDPMScheduler diff --git a/scrapped_outputs/8812002e08ee5b59467e1a9e9f531534.txt b/scrapped_outputs/8812002e08ee5b59467e1a9e9f531534.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/8812002e08ee5b59467e1a9e9f531534.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/882f1b3ed620aab6dd0e19946c840139.txt b/scrapped_outputs/882f1b3ed620aab6dd0e19946c840139.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b91d27246c47e715d8fb32343e10ffa0337626a --- /dev/null +++ b/scrapped_outputs/882f1b3ed620aab6dd0e19946c840139.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. 
By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/884af83d61fbdc091364b4448fca9b7c.txt b/scrapped_outputs/884af83d61fbdc091364b4448fca9b7c.txt new file mode 100644 index 0000000000000000000000000000000000000000..46b497020216eabb3750dd7c2b1feffa0b29e64b --- /dev/null +++ b/scrapped_outputs/884af83d61fbdc091364b4448fca9b7c.txt @@ -0,0 +1,342 @@ +Pix2Pix Zero Zero-shot Image-to-Image Translation is by Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. The abstract from the paper is: Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. 
In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing. You can find additional information about Pix2Pix Zero on the project page, original codebase, and try it out in a demo. Tips The pipeline can be conditioned on real input images. Check out the code examples below to know more. The pipeline exposes two arguments namely source_embeds and target_embeds +that let you control the direction of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the pipeline, you simply have to set the embeddings related to the phrases including “cat” to +source_embeds and “dog” to target_embeds. Refer to the code example below for more details. When you’re using this pipeline from a prompt, specify the source concept in the prompt. Taking +the above example, a valid input prompt would be: “a high resolution painting of a cat in the style of van gogh”. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_embeds and target_embeds. Change the input prompt to include “dog”. To learn more about how the source and target embeddings are generated, refer to the original paper. Below, we also provide some directions on how to generate the embeddings. Note that the quality of the outputs generated with this pipeline is dependent on how good the source_embeds and target_embeds are. Please, refer to this discussion for some suggestions on the topic. 
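As a concrete illustration of the reversed "dog -> cat" direction mentioned in the tips above, the snippet below is a minimal sketch. It assumes a StableDiffusionPix2PixZeroPipeline and the cat.pt / dog.pt embedding files from the usage example below have already been loaded, and it simply swaps the two embedding tensors while mentioning the new source concept ("dog") in the prompt. Copied
# minimal sketch of the reversed "dog -> cat" edit direction
# (assumes `pipeline`, dog.pt and cat.pt from the usage example below are already available)
import torch

src_embeds = torch.load("dog.pt")     # the source concept is now "dog"
target_embeds = torch.load("cat.pt")  # the target concept is now "cat"

prompt = "a high resolution painting of a dog in the style of van gogh"  # prompt mentions the source concept

image = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
).images[0]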
Available Pipelines: Pipeline Tasks Demo StableDiffusionPix2PixZeroPipeline Text-Based Image Editing 🤗 Space Usage example Based on an image generated with the input prompt Copied import requests +import torch + +from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +def download(embedding_url, local_filepath): + r = requests.get(embedding_url) + with open(local_filepath, "wb") as f: + f.write(r.content) + + +model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + model_ckpt, conditions_input_image=False, torch_dtype=torch.float16 +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "a high resolution painting of a cat in the style of van gogh" +src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt" +target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt" + +for url in [src_embs_url, target_embs_url]: + download(url, url.split("/")[-1]) + +src_embeds = torch.load(src_embs_url.split("/")[-1]) +target_embeds = torch.load(target_embs_url.split("/")[-1]) + +image = pipeline( + prompt, + source_embeds=src_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images[0] +image Based on an input image When the pipeline is conditioned on an input image, we first obtain an inverted +noise from it using a DDIMInverseScheduler with the help of a generated caption. Then the inverted noise is used to start the generation process. First, let’s load our pipeline: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +captioner_id = "Salesforce/blip-image-captioning-base" +processor = BlipProcessor.from_pretrained(captioner_id) +model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True) + +sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + sd_model_ckpt, + caption_generator=model, + caption_processor=processor, + torch_dtype=torch.float16, + safety_checker=None, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() Then, we load an input image for conditioning and obtain a suitable caption for it: Copied from diffusers.utils import load_image + +img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" +raw_image = load_image(img_url).resize((512, 512)) +caption = pipeline.generate_caption(raw_image) +caption Then we employ the generated caption and the input image to get the inverted noise: Copied generator = torch.manual_seed(0) +inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents Now, generate the image with edit directions: Copied # See the "Generating source and target embeddings" section below to +# automate the generation of these captions with a pre-trained model like Flan-T5 as explained below.
+source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] +target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +source_embeds = pipeline.get_embeds(source_prompts, batch_size=2) +target_embeds = pipeline.get_embeds(target_prompts, batch_size=2) + + +image = pipeline( + caption, + source_embeds=source_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, + generator=generator, + latents=inv_latents, + negative_prompt=caption, +).images[0] +image Generating source and target embeddings The authors originally used the GPT-3 API to generate the source and target captions for discovering +edit directions. However, we can also leverage open source and public models for the same purpose. +Below, we provide an end-to-end example with the Flan-T5 model +for generating captions and CLIP for +computing embeddings on the generated captions. 1. Load the generation model: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) 2. Construct a starting prompt: Copied source_concept = "cat" +target_concept = "dog" + +source_text = f"Provide a caption for images containing a {source_concept}. The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. The captions should be in English and should be no longer than 150 characters." Here, we’re interested in the “cat -> dog” direction. 3. Generate captions: We can use a utility like the following for this purpose. Copied def generate_captions(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) And then we just call it to generate our captions: Copied source_captions = generate_captions(source_text) +target_captions = generate_captions(target_text) +print(source_captions, target_captions, sep='\n') We encourage you to play around with the different parameters supported by the +generate() method (documentation) for the generation quality you are looking for. 4. Load the embedding model: Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model. Copied from diffusers import StableDiffusionPix2PixZeroPipeline + +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +) +pipeline = pipeline.to("cuda") +tokenizer = pipeline.tokenizer +text_encoder = pipeline.text_encoder 5.
Compute embeddings: Copied import torch + +def embed_captions(sentences, tokenizer, text_encoder, device="cuda"): + with torch.no_grad(): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeddings = embed_captions(source_captions, tokenizer, text_encoder) +target_embeddings = embed_captions(target_captions, tokenizer, text_encoder) And you’re done! Here is a Colab Notebook that you can use to interact with the entire process. Now, you can use these embeddings directly while calling the pipeline: Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + +image = pipeline( + prompt, + source_embeds=source_embeddings, + target_embeds=target_embeddings, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPix2PixZeroPipeline class diffusers.StableDiffusionPix2PixZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] feature_extractor: CLIPImageProcessor safety_checker: StableDiffusionSafetyChecker inverse_scheduler: DDIMInverseScheduler caption_generator: BlipForConditionalGeneration caption_processor: BlipProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. requires_safety_checker (bool) — +Whether the pipeline requires a safety checker. We recommend setting it to True if you’re using the +pipeline publicly. Pipeline for pixel-level image editing using Pix2Pix Zero. Based on Stable Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: typing.Union[typing.List[str], str, NoneType] = None source_embeds: Tensor = None target_embeds: Tensor = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None cross_attention_guidance_amount: float = 0.1 output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. source_embeds (torch.Tensor) — +Source concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. target_embeds (torch.Tensor) — +Target concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import requests +>>> import torch + +>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +>>> def download(embedding_url, local_filepath): +... r = requests.get(embedding_url) +... with open(local_filepath, "wb") as f: +... f.write(r.content) + + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.to("cuda") + +>>> prompt = "a high resolution painting of a cat in the style of van gogh" +>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt" +>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt" + +>>> for url in [source_emb_url, target_emb_url]: +... download(url, url.split("/")[-1]) + +>>> src_embeds = torch.load(source_emb_url.split("/")[-1]) +>>> target_embeds = torch.load(target_emb_url.split("/")[-1]) +>>> images = pipeline( +... prompt, +... source_embeds=src_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... ).images + +>>> images[0].save("edited_image_dog.png") construct_direction < source > ( embs_source: Tensor embs_target: Tensor ) Constructs the edit direction to steer the image generation process semantically.
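For intuition, the edit direction used here is derived from the two sets of caption embeddings, following the Pix2Pix Zero paper's idea of taking the difference between the averaged target and source embeddings. The snippet below is a minimal conceptual sketch of that computation; it illustrates the idea and is not guaranteed to match the library's exact implementation. Copied
# conceptual sketch: edit direction = mean(target embeddings) - mean(source embeddings)
import torch

def sketch_construct_direction(embs_source: torch.Tensor, embs_target: torch.Tensor) -> torch.Tensor:
    # average over the caption dimension and keep a leading batch dimension
    return (embs_target.mean(0) - embs_source.mean(0)).unsqueeze(0)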
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. generate_caption < source > ( images ) Generates caption for a given image. invert < source > ( prompt: typing.Optional[str] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None num_inference_steps: int = 50 guidance_scale: float = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None cross_attention_guidance_amount: float = 0.1 output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 5 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch which will be used for conditioning. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control the Kullback–Leibler divergence output. num_reg_steps (int, optional, defaults to 5) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Function used to generate inverted latents given a prompt and image. Examples: Copied >>> import torch +>>> from transformers import BlipForConditionalGeneration, BlipProcessor +>>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +>>> import requests +>>> from PIL import Image + +>>> captioner_id = "Salesforce/blip-image-captioning-base" +>>> processor = BlipProcessor.from_pretrained(captioner_id) +>>> model = BlipForConditionalGeneration.from_pretrained( +... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True +... ) + +>>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( +... sd_model_ckpt, +... caption_generator=model, +... caption_processor=processor, +... torch_dtype=torch.float16, +... safety_checker=None, +...
) + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" + +>>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +>>> # generate caption +>>> caption = pipeline.generate_caption(raw_image) + +>>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii" +>>> inv_latents = pipeline.invert(caption, image=raw_image).latents +>>> # we need to generate source and target embeds + +>>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] + +>>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +>>> source_embeds = pipeline.get_embeds(source_prompts) +>>> target_embeds = pipeline.get_embeds(target_prompts) +>>> # the latents can then be used to edit a real image +>>> # when using Stable Diffusion 2 or other models that use v-prediction +>>> # set `cross_attention_guidance_amount` to 0.01 or less to avoid input latent gradient explosion + +>>> image = pipeline( +... caption, +... source_embeds=source_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... generator=generator, +... latents=inv_latents, +... negative_prompt=caption, +... ).images[0] +>>> image.save("edited_image.png") diff --git a/scrapped_outputs/886e89855d2bc7cf86c5e9b6e90767dd.txt b/scrapped_outputs/886e89855d2bc7cf86c5e9b6e90767dd.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/886e89855d2bc7cf86c5e9b6e90767dd.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. 
Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what not to include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than +sld_warmup_steps.
sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels), representing the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. unsafe_images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker and may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled. Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/888dcf602adbb78ea61284234cceccd5.txt b/scrapped_outputs/888dcf602adbb78ea61284234cceccd5.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/888dcf602adbb78ea61284234cceccd5.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0).
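As a quick illustration of how these processors are used in practice, the snippet below is a minimal sketch that explicitly assigns the PyTorch 2.0 attention processor to a Stable Diffusion UNet via set_attn_processor; the checkpoint name is only an example. Copied
# minimal sketch: load a UNet and explicitly set the PyTorch 2.0 attention processor
# (the runwayml/stable-diffusion-v1-5 checkpoint is used purely as an example)
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
# apply scaled dot-product attention to every attention module in the model
unet.set_attn_processor(AttnProcessor2_0())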
FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. 
cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. 
train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. diff --git a/scrapped_outputs/88ae4b921c9e2b19b7dc9c5a5e734276.txt b/scrapped_outputs/88ae4b921c9e2b19b7dc9c5a5e734276.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/88d5d9c799e1b41368114127c4fb6234.txt b/scrapped_outputs/88d5d9c799e1b41368114127c4fb6234.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/88d5d9c799e1b41368114127c4fb6234.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). 
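For context, gradient checkpointing trades extra compute for lower memory use during training by recomputing activations in the backward pass. The snippet below is a minimal sketch of toggling it on a model that inherits from ModelMixin; the UNet checkpoint is only an example. Copied
# minimal sketch: enable gradient checkpointing on a diffusers model for memory-efficient training
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
unet.enable_gradient_checkpointing()   # recompute activations during the backward pass to save memory
# ... run training steps ...
unet.disable_gradient_checkpointing()  # restore the default (faster, higher-memory) behavior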
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. 
We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. 
+ Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. 
In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you want don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... 
for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/892eb12099481c94b116d9872908b567.txt b/scrapped_outputs/892eb12099481c94b116d9872908b567.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/892eb12099481c94b116d9872908b567.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others.
Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads the folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namespace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all its components to the Hub.
For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/89646aa5d7258d99ca4fbf12c3138d4c.txt b/scrapped_outputs/89646aa5d7258d99ca4fbf12c3138d4c.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/89646aa5d7258d99ca4fbf12c3138d4c.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). 
To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. 
Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/8971537b70106aa653f29d4532602ee9.txt b/scrapped_outputs/8971537b70106aa653f29d4532602ee9.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/8971537b70106aa653f29d4532602ee9.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
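Since the reference entries below do not show end-to-end usage, here is a hedged sketch of the most common pattern: swapping HeunDiscreteScheduler into an existing pipeline by reusing that pipeline's scheduler config (the checkpoint id is just an example).

import torch
from diffusers import DiffusionPipeline, HeunDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Building the scheduler from the existing config keeps the beta schedule and
# timestep spacing consistent with what the checkpoint was trained with.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=25).images[0]

Note that Heun is a second-order method, so each scheduler step evaluates the model twice; a given number of inference steps therefore costs roughly twice as many UNet calls as a first-order scheduler.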
scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/89aa75765c77c26e122eaeac383ed182.txt b/scrapped_outputs/89aa75765c77c26e122eaeac383ed182.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb2a7efc0d3fd312b0eaa154732c00aa8b34bd28 --- /dev/null +++ b/scrapped_outputs/89aa75765c77c26e122eaeac383ed182.txt @@ -0,0 +1,123 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. 
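Before the full argument list below, a brief hedged sketch of what single-file loading looks like in practice; the local file path and config path are placeholders, and the keyword arguments shown are optional ones documented below.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "./my-finetune.safetensors",                 # placeholder path to a single-file checkpoint
    torch_dtype=torch.float16,
    original_config_file="./v1-inference.yaml",  # placeholder; normally inferred from the checkpoint
    load_safety_checker=False,
)
pipe = pipe.to("cuda")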
from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. 
The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into a AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. 
The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/lllyasviel/ControlNet/main/models/cldm_v15.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. 
Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +model = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/89c96cf1212e425c930ce5bd943abe1f.txt b/scrapped_outputs/89c96cf1212e425c930ce5bd943abe1f.txt new file mode 100644 index 0000000000000000000000000000000000000000..d7fd413d05fa387c0d26fa8503b5642d90f939e1 --- /dev/null +++ b/scrapped_outputs/89c96cf1212e425c930ce5bd943abe1f.txt @@ -0,0 +1,135 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: + + +```bash +cd examples/controlnet +pip install -r requirements.txt +``` + + +If you have access to a TPU, the Flax training script runs even faster! Let’s run the training script on the Google Cloud TPU VM. 
Create a single TPU v4-8 VM and connect to it: Copied ZONE=us-central2-b +TPU_TYPE=v4-8 +VM_NAME=hg_flax + +gcloud alpha compute tpus tpu-vm create $VM_NAME \ + --zone $ZONE \ + --accelerator-type $TPU_TYPE \ + --version tpu-vm-v4-base + +gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE -- \ Install JAX 0.4.5: Copied pip install "jax[tpu]==0.4.5" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html Then install the required dependencies for the Flax script: Copied cd examples/controlnet +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. 
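As an aside on the Min-SNR weighting mentioned above, the strategy boils down to computing each timestep's signal-to-noise ratio and clamping it at snr_gamma before reweighting the per-sample loss. The following is a hedged sketch of the idea for epsilon prediction, not the training script's exact helper.

import torch

def min_snr_loss_weights(alphas_cumprod: torch.Tensor, timesteps: torch.Tensor, snr_gamma: float = 5.0):
    # SNR(t) = signal variance / noise variance = alpha_bar_t / (1 - alpha_bar_t)
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # For epsilon prediction, the per-sample MSE is scaled by min(SNR, gamma) / SNR,
    # which caps the contribution of low-noise (high-SNR) timesteps:
    # loss = (weights * per_sample_mse).mean()
    return torch.clamp(snr, max=snr_gamma) / snr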
The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. + + +On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. 
Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ + + +On a 12GB GPU, you’ll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to None instead of zero to reduce your memory-usage. Copied accelerate launch train_controlnet.py \ + --use_8bit_adam \ + --gradient_checkpointing \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + + +On a 8GB GPU, you’ll need to use DeepSpeed to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: Copied accelerate config During configuration, confirm that you want to use DeepSpeed stage 2. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the DeepSpeed documentation for more configuration options. Your configuration file should look something like: Copied compute_environment: LOCAL_MACHINE +deepspeed_config: + gradient_accumulation_steps: 4 + offload_optimizer_device: cpu + offload_param_device: cpu + zero3_init_flag: false + zero_stage: 2 +distributed_type: DEEPSPEED You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam deepspeed.ops.adam.DeepSpeedCPUAdam for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That’s it! You don’t need to add any additional parameters to your training command. + + + + + + Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub + + +With Flax, you can profile your code by adding the --profile_steps==5 parameter to your training command. Install the Tensorboard profile plugin: Copied pip install tensorflow tensorboard-plugin-profile +tensorboard --logdir runs/fill-circle-100steps-20230411_165612/ Then you can inspect the profile at http://localhost:6006/#profile. If you run into version conflicts with the plugin, try uninstalling and reinstalling all versions of TensorFlow and Tensorboard. The debugging functionality of the profile plugin is still experimental, and not all views are fully functional. The trace_viewer cuts off events after 1M, which can result in all your device traces getting lost if for example, you profile the compilation step by accident. 
Copied python3 train_controlnet_flax.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=1000 \ + --train_batch_size=2 \ + --revision="non-ema" \ + --from_pt \ + --report_to="wandb" \ + --tracker_project_name=$HUB_MODEL_ID \ + --num_train_epochs=11 \ + --push_to_hub \ + --hub_model_id=$HUB_MODEL_ID + + +Once training is complete, you can use your newly trained model for inference! Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/89d8825e7e3aa87deff13e715b4fa222.txt b/scrapped_outputs/89d8825e7e3aa87deff13e715b4fa222.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/89d8825e7e3aa87deff13e715b4fa222.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation.
When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. 
Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/89ed6a984da03c42b5f4ad2d22d6f32e.txt b/scrapped_outputs/89ed6a984da03c42b5f4ad2d22d6f32e.txt new file mode 100644 index 0000000000000000000000000000000000000000..820ee6f85e0c58e72d1011f18bab5fed44b5087e --- /dev/null +++ b/scrapped_outputs/89ed6a984da03c42b5f4ad2d22d6f32e.txt @@ -0,0 +1,945 @@ +Text-to-Image Generation + + +StableDiffusionPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionPipeline is capable of generating photo-realistic images given any text input using Stable Diffusion. 
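For instance, a minimal text-to-image sketch, assuming one of the checkpoints listed below is used; the prompt and output file name are arbitrary.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fixing the generator seed makes the sample reproducible.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe("a photograph of an astronaut riding a horse", generator=generator).images[0]
image.save("astronaut.png")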
+The original codebase can be found here: +Stable Diffusion V1: CompVis/stable-diffusion +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-v1-4 (512x512 resolution) CompVis/stable-diffusion-v1-4 +stable-diffusion-v1-5 (512x512 resolution) runwayml/stable-diffusion-v1-5 +stable-diffusion-2-base (512x512 resolution): stabilityai/stable-diffusion-2-base +stable-diffusion-2 (768x768 resolution): stabilityai/stable-diffusion-2 +stable-diffusion-2-1-base (512x512 resolution) stabilityai/stable-diffusion-2-1-base +stable-diffusion-2-1 (768x768 resolution): stabilityai/stable-diffusion-2-1 + +class diffusers.StableDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
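Because the scheduler is just another pipeline component, it can be swapped after loading without touching the other modules. A minimal sketch (the checkpoint id and the DDIM choice are only illustrative):

>>> from diffusers import StableDiffusionPipeline, DDIMScheduler

>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # Build a DDIM scheduler from the current scheduler's config and assign it to the pipeline
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)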
+In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +Ckpt: loaders.FromCkptMixin.from_ckpt() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. 
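Both slicing options above only change how the forward pass is chunked; they do not require any retraining. A minimal usage sketch (the checkpoint id is illustrative):

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # Trade a small amount of speed for a lower peak memory footprint
>>> pipe.enable_attention_slicing()
>>> pipe.enable_vae_slicing()
>>> image = pipe("a photo of an astronaut riding a horse on mars").images[0]

Call the corresponding disable_* methods to return to single-step attention and VAE decoding.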
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_vae_tiling + +< +source +> +( +) + + + +Enable tiled VAE decoding. +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. + +disable_vae_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
+ + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +from_ckpt + +< +source +> +( +pretrained_model_link_or_path +**kwargs + +) + + +Parameters + +pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file on the Hub. Should be in the format +"https://huggingface.co//blob/main/" +A path to a file containing all pipeline weights. + + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
+
+
+cache_dir (Union[str, os.PathLike], optional) —
+Path to a directory in which a downloaded pretrained model configuration should be cached if the
+standard cache should not be used.
+
+
+resume_download (bool, optional, defaults to False) —
+Whether or not to delete incompletely received files. Will attempt to resume the download if such a
+file exists.
+
+
+proxies (Dict[str, str], optional) —
+A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
+
+
+local_files_only (bool, optional, defaults to False) —
+Whether or not to only look at local files (i.e., do not try to download the model).
+
+
+use_auth_token (str or bool, optional) —
+The token to use as HTTP bearer authorization for remote files. If True, will use the token generated
+when running huggingface-cli login (stored in ~/.huggingface).
+
+
+revision (str, optional, defaults to "main") —
+The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
+git-based system for storing models and other artifacts on huggingface.co, so revision can be any
+identifier allowed by git.
+
+
+use_safetensors (bool, optional) —
+If set to True, the pipeline will be loaded from safetensors weights. If set to None (the
+default), the pipeline will load from safetensors weights if they are available and the
+safetensors library is installed. If set to False, the pipeline will not use safetensors.
+
+
+extract_ema (bool, optional, defaults to False) — Only relevant for
+checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights or not. Defaults
+to False. Pass True to extract the EMA weights. EMA weights usually yield higher quality images for
+inference. Non-EMA weights are usually better to continue fine-tuning.
+
+
+upcast_attention (bool, optional, defaults to None) —
+Whether the attention computation should always be upcasted. This is necessary when running Stable
+Diffusion 2.1.
+
+
+image_size (int, optional, defaults to 512) —
+The image size that the model was trained on. Use 512 for Stable Diffusion v1.X and Stable Diffusion v2
+Base. Use 768 for Stable Diffusion v2.
+
+
+prediction_type (str, optional) —
+The prediction type that the model was trained on. Use 'epsilon' for Stable Diffusion v1.X and Stable
+Diffusion v2 Base. Use 'v_prediction' for Stable Diffusion v2.
+
+
+num_in_channels (int, optional, defaults to None) —
+The number of input channels. If None, it will be automatically inferred.
+
+
+scheduler_type (str, optional, defaults to ‘pndm’) —
+Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"].
+
+
+load_safety_checker (bool, optional, defaults to True) —
+Whether to load the safety checker or not. Defaults to True.
+
+
+kwargs (remaining dictionary of keyword arguments, optional) —
+Can be used to overwrite load- and saveable variables (i.e., the pipeline components) of the
+specific pipeline class. The overwritten components are then directly passed to the pipeline's
+__init__ method. See example below for more information.
+
+
+
+Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights saved in the original .ckpt format.
+The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated).
+
+Examples:
+
+
+ Copied
+>>> from diffusers import StableDiffusionPipeline
+
+>>> # Download pipeline from huggingface.co and cache.
+>>> pipeline = StableDiffusionPipeline.from_ckpt( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_ckpt( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. 
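A minimal sketch of loading LoRA attention layers into the pipeline. The repository id is a placeholder, and passing the LoRA strength through the "scale" entry of cross_attention_kwargs is an assumption based on how LoRA layers are typically scaled in this version of the library:

>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # Placeholder repository containing LoRA weights for the UNet (and optionally the text encoder)
>>> pipe.load_lora_weights("path/to/lora-repo")

>>> # The assumed "scale" key weights how strongly the LoRA layers contribute
>>> image = pipe("a photo of an astronaut riding a horse on mars", cross_attention_kwargs={"scale": 0.7}).images[0]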
+
+save_lora_weights
+
+<
+source
+>
+(
+save_directory: typing.Union[str, os.PathLike]
+unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None
+text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None
+is_main_process: bool = True
+weight_name: str = None
+save_function: typing.Callable = None
+safe_serialization: bool = False
+
+)
+
+
+Parameters
+
+save_directory (str or os.PathLike) —
+Directory to which to save. Will be created if it doesn’t exist.
+
+
+unet_lora_layers (Dict[str, torch.nn.Module]) —
+State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the
+serialization process easier and cleaner.
+
+
+text_encoder_lora_layers (Dict[str, torch.nn.Module]) —
+State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from
+transformers, we cannot rejig it. That is why we have to explicitly pass the text encoder LoRA state
+dict.
+
+
+is_main_process (bool, optional, defaults to True) —
+Whether the process calling this is the main process or not. Useful during distributed training (for example, on
+TPUs) when this function needs to be called on all processes. In this case, set is_main_process=True only on
+the main process to avoid race conditions.
+
+
+save_function (Callable) —
+The function to use to save the state dictionary. Useful during distributed training (for example, on TPUs) when
+torch.save needs to be replaced with another method. Can be configured with the environment variable
+DIFFUSERS_SAVE_MODE.
+
+
+
+Save the LoRA parameters corresponding to the UNet and the text encoder.
+
+enable_model_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
+to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
+method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
+enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
+
+enable_sequential_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
+text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
+torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
+
+class diffusers.FlaxStableDiffusionPipeline
+
+<
+source
+>
+(
+vae: FlaxAutoencoderKL
+text_encoder: FlaxCLIPTextModel
+tokenizer: CLIPTokenizer
+unet: FlaxUNet2DConditionModel
+scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler]
+safety_checker: FlaxStableDiffusionSafetyChecker
+feature_extractor: CLIPImageProcessor
+dtype: dtype = 
+
+)
+
+
+Parameters
+
+vae (FlaxAutoencoderKL) —
+Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
+
+
+text_encoder (FlaxCLIPTextModel) —
+Frozen text-encoder. Stable Diffusion uses the text portion of
+CLIP,
+specifically the clip-vit-large-patch14 variant.
+ + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (FlaxUNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. + + +safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt_ids: array +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +prng_seed: PRNGKeyArray +num_inference_steps: int = 50 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +guidance_scale: typing.Union[float, array] = 7.5 +latents: array = None +neg_prompt_ids: array = None +return_dict: bool = True +jit: bool = False + +) +→ +FlaxStableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +latents (jnp.array, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. tensor will ge generated +by sampling using the supplied random generator. + + +jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. NOTE: This argument +exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. + + +Returns + +FlaxStableDiffusionPipelineOutput or tuple + + + +FlaxStableDiffusionPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. 
+ + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) diff --git a/scrapped_outputs/89f4ea086bb596e87a1de48d66130d47.txt b/scrapped_outputs/89f4ea086bb596e87a1de48d66130d47.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/8a1558f7e8a3ff8d320899e2abafc263.txt b/scrapped_outputs/8a1558f7e8a3ff8d320899e2abafc263.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/8a1558f7e8a3ff8d320899e2abafc263.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. diff --git a/scrapped_outputs/8a55b8461cc94b2852098751a7e426f2.txt b/scrapped_outputs/8a55b8461cc94b2852098751a7e426f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/8a55b8461cc94b2852098751a7e426f2.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. 
Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). 
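As noted above, from_pretrained() inspects the checkpoint's configuration and returns the task-specific pipeline class rather than the base class itself. A minimal sketch (the checkpoint id is illustrative):

>>> from diffusers import DiffusionPipeline, StableDiffusionPipeline

>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
>>> # The concrete subclass is resolved automatically from the checkpoint's model_index.json
>>> isinstance(pipe, StableDiffusionPipeline)
True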
__call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/8a667120910f3e8df3582a7447aada42.txt b/scrapped_outputs/8a667120910f3e8df3582a7447aada42.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/8a667120910f3e8df3582a7447aada42.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. 
UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. 
steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/8a74265eaccf3fe83d4f2c07a291281b.txt b/scrapped_outputs/8a74265eaccf3fe83d4f2c07a291281b.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/8a74265eaccf3fe83d4f2c07a291281b.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! diff --git a/scrapped_outputs/8a78b127a88046cc171fd1795e37f9e1.txt b/scrapped_outputs/8a78b127a88046cc171fd1795e37f9e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c75745a0e7cec39d676aa7cacbf09ed6d05e3a4 --- /dev/null +++ b/scrapped_outputs/8a78b127a88046cc171fd1795e37f9e1.txt @@ -0,0 +1,361 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. 
This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. 
To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive look at how to use SDXL and configure its parameters. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood.
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
+ + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable a difference between the regular and inpaint checkpoints.
+ + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the code below to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +# Convert mask to grayscale NumPy array +mask_image_arr = np.array(mask_image.convert("L")) +# Add a channel dimension to the end of the grayscale mask +mask_image_arr = mask_image_arr[:, :, None] +# Binarize the mask: 1s correspond to the pixels which are repainted +mask_image_arr = mask_image_arr.astype(np.float32) / 255.0 +mask_image_arr[mask_image_arr < 0.5] = 0 +mask_image_arr[mask_image_arr >= 0.5] = 1 + +# Take the masked pixels from the repainted image and the unmasked pixels from the initial image +unmasked_unchanged_image_arr = (1 - mask_image_arr) * init_image + mask_image_arr * repainted_image +unmasked_unchanged_image = PIL.Image.fromarray(unmasked_unchanged_image_arr.round().astype("uint8")) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 
📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may vary more from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination of high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want.
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/8aa3fb006b83be3b3925bbf1fb362c9c.txt b/scrapped_outputs/8aa3fb006b83be3b3925bbf1fb362c9c.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef87814e315d0b1fa9553c9af5b3de5bd286d86a --- /dev/null +++ b/scrapped_outputs/8aa3fb006b83be3b3925bbf1fb362c9c.txt @@ -0,0 +1,30 @@ +Unconditional Image Generation + +The DiffusionPipeline is the easiest way to use a pre-trained diffusion system for inference +Start by creating an instance of DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any Diffusers’ checkpoint. +In this guide though, you’ll use DiffusionPipeline for unconditional image generation with DDPM: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256") +The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. +Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on GPU. +You can move the generator object to GPU, just like you would in PyTorch. + + + Copied +>>> generator.to("cuda") +Now you can use the generator on your text prompt: + + + Copied +>>> image = generator().images[0] +The output is by default wrapped into a PIL Image object. +You can save the image by simply calling: + + + Copied +>>> image.save("generated_image.png") diff --git a/scrapped_outputs/8ace9eb765032f6476f716d1941d4d5a.txt b/scrapped_outputs/8ace9eb765032f6476f716d1941d4d5a.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab8d27ee7a818fbb85c3449a335f8a6b3e076abb --- /dev/null +++ b/scrapped_outputs/8ace9eb765032f6476f716d1941d4d5a.txt @@ -0,0 +1,387 @@ +Multistep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverMultistepScheduler + + +class diffusers.DPMSolverMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True +use_karras_sigmas: typing.Optional[bool] = False + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. 
Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. + + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. + + +use_karras_sigmas (bool, optional, defaults to False) — +This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the +noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence +of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). 
For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +multistep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DPM-Solver. + +multistep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. 
+ + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DPM-Solver. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DPM-Solver. diff --git a/scrapped_outputs/8add503ae6a6dda47b4e207e15a37c25.txt b/scrapped_outputs/8add503ae6a6dda47b4e207e15a37c25.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/8add503ae6a6dda47b4e207e15a37c25.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. 
Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). 
thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. 
This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/8b012c22dcdcefa19d64b179114bb8d6.txt b/scrapped_outputs/8b012c22dcdcefa19d64b179114bb8d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/8b012c22dcdcefa19d64b179114bb8d6.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. 
Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/8b1a50f27a74f251d2f57ac2c2c7dc2e.txt b/scrapped_outputs/8b1a50f27a74f251d2f57ac2c2c7dc2e.txt new file mode 100644 index 0000000000000000000000000000000000000000..163deebba32d44239adf15467f9dcbdfbfad7c90 --- /dev/null +++ b/scrapped_outputs/8b1a50f27a74f251d2f57ac2c2c7dc2e.txt @@ -0,0 +1,635 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
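Before diving into the full API reference below, here is a minimal text-to-image sketch under stated assumptions: the checkpoint names are the same ones used in the canny example further down, and the control-image path is a placeholder for a conditioning image (for example a Canny edge map) you have prepared yourself. Copied
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL
from diffusers.utils import load_image

# Load a ControlNet checkpoint and attach it to the SDXL base pipeline
controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Placeholder path: any control image matching the ControlNet type (here, a Canny edge map)
control_image = load_image("path/to/canny_edge_map.png")
image = pipe(
    "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    image=control_image,
).images[0]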
StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation.
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
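These switches act on an already constructed pipeline object. As a rough illustration of how they can be combined (the FreeU factors below are only illustrative; verify them against the official repository linked above): Copied
# Assuming `pipe` is a StableDiffusionXLControlNetPipeline that has already been loaded
pipe.enable_vae_slicing()                          # decode the batch one slice at a time to lower peak memory
pipe.enable_vae_tiling()                           # decode/encode in tiles, helpful for large resolutions
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)  # illustrative factors; tune per model family

# ... run inference here ...

# Turn the mechanisms back off to restore the default behavior
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()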
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
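As a rough sketch of how pre-computed embeddings can be reused across calls, assuming an already loaded pipe on "cuda" and a prepared control_image; the four-tuple return order shown here (prompt, negative prompt, and their pooled counterparts) is an assumption to double-check against your installed version: Copied
# Encode once, reuse for several generations (hypothetical prompts; `control_image` prepared beforehand)
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle",
    negative_prompt="low quality, bad quality, sketches",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# The embeddings replace the string prompts in __call__; the ControlNet conditioning image is still required
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=control_image,
).images[0]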
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. 
Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... 
image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
Pipeline for text-guided image inpainting using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that region based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting.
This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument.
pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings.
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate opencv-python +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> from PIL import Image +>>> import numpy as np +>>> import cv2 +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features.
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/8b284214ffdf1d196eed13a0058153fe.txt b/scrapped_outputs/8b284214ffdf1d196eed13a0058153fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/8b284214ffdf1d196eed13a0058153fe.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion XL (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names of the adapters to delete. Can be a single string or a list of strings. Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect.
lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. 
This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. 
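For illustration, here is a minimal sketch of how save_lora_weights() could be called after a training run; unet_lora_state_dict is a placeholder for a LoRA state dict produced by your own training loop, and the directory and file names are arbitrary: Copied
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# `unet_lora_state_dict` is a placeholder for a LoRA state dict produced by your own training loop
pipeline.save_lora_weights(
    save_directory="my-lora-checkpoint",
    unet_lora_layers=unet_lora_state_dict,
    weight_name="pytorch_lora_weights.safetensors",
)

# the saved weights can later be reloaded with `load_lora_weights`
pipeline.load_lora_weights("my-lora-checkpoint", weight_name="pytorch_lora_weights.safetensors")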
set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/8b81d457d5e14a99c3bd4e159ddb142c.txt b/scrapped_outputs/8b81d457d5e14a99c3bd4e159ddb142c.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/8b81d457d5e14a99c3bd4e159ddb142c.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. 
Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. 
For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often negligible, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, more complex pipelines such as UnCLIPPipeline are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results.
Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/8bcb7c87166922202d87f5cf1b00ea15.txt b/scrapped_outputs/8bcb7c87166922202d87f5cf1b00ea15.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/8bcb7c87166922202d87f5cf1b00ea15.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
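As a small, hedged illustration of the component-reuse tip above (a sketch, assuming the Salesforce/blipdiffusion checkpoint used in the examples below), the pipeline's sub-models are exposed as regular attributes and can be inspected or passed to other pipelines instead of being loaded a second time: Copied
import torch
from diffusers import BlipDiffusionPipeline

blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained(
    "Salesforce/blipdiffusion", torch_dtype=torch.float16
)

# `components` maps names (vae, unet, text_encoder, qformer, ...) to the loaded sub-models;
# these objects can be handed to another pipeline's `from_pretrained` call for reuse
print(blip_diffusion_pipe.components.keys())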
BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. 
Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... 
) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/8bd192dc44aac144a3081df4cb1b4c24.txt b/scrapped_outputs/8bd192dc44aac144a3081df4cb1b4c24.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/8bd192dc44aac144a3081df4cb1b4c24.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/8be823bb8b6ef6b8065bf234d0562afd.txt b/scrapped_outputs/8be823bb8b6ef6b8065bf234d0562afd.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/8be823bb8b6ef6b8065bf234d0562afd.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. 
Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. 
up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/8beddc7007b97af3a50930439d699b0d.txt b/scrapped_outputs/8beddc7007b97af3a50930439d699b0d.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/8beddc7007b97af3a50930439d699b0d.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. 
All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument specified by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) Overlay the inpaint output onto the original image. binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) representing a rectangular region that contains all masked areas in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect ratio of the original image; +for example, if the user drew a mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128.
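The worked example in the get_crop_region() entry above (a 128x32 mask inside a 512x512 image expanding to roughly 128x128) can be reproduced with a short sketch; the synthetic mask drawn here is purely illustrative. Copied
from PIL import Image, ImageDraw
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor()

# Draw a 128x32 white (masked) strip inside an otherwise black 512x512 mask.
mask = Image.new("L", (512, 512), 0)
ImageDraw.Draw(mask).rectangle([192, 240, 320, 272], fill=255)

# The returned (x1, y1, x2, y2) box is expanded to match the 1:1 aspect ratio of the
# 512x512 image, so the 128x32 strip yields roughly a 128x128 crop region.
x1, y1, x2, y2 = processor.get_crop_region(mask, width=512, height=512)
print((x1, y1, x2, y2))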
get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, which can be a PIL image, NumPy array, or PyTorch tensor. If it is a NumPy array, it should have +shape [batch, height, width] or [batch, height, width, channel]; if it is a PyTorch tensor, it should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, the height of the image input is used. width (int, optional, defaults to None) — +The width of the preprocessed image. If None, the width of the image input is used. This function returns the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, which should be a PyTorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors. A list of supported formats is also accepted. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, get_default_height_width() is used to get the default height. width (int, optional, defaults to None) — +The width of the preprocessed image. If None, get_default_height_width() is used to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default, fill, or crop. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image.
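Putting the preprocess() and postprocess() entries above together, a minimal round-trip sketch looks like this (the sample URL is only illustrative; any RGB image works): Copied
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import load_image

processor = VaeImageProcessor(vae_scale_factor=8)

image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

# preprocess() returns a (batch, channel, height, width) tensor normalized to [-1, 1],
# with height and width adjusted to multiples of vae_scale_factor.
tensor = processor.preprocess(image, height=512, width=512)
print(tensor.shape)

# postprocess() denormalizes back to [0, 1] and converts to the requested output_type.
pil_images = processor.postprocess(tensor, output_type="pil")
pil_images[0].save("roundtrip.png")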
resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, which can be a PIL image, NumPy array, or PyTorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default, fill, or crop. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/8c0ca5958cff0077a62ee4228eaa683f.txt b/scrapped_outputs/8c0ca5958cff0077a62ee4228eaa683f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/8c1a237ed8075f4bd982ac27fd693d08.txt b/scrapped_outputs/8c1a237ed8075f4bd982ac27fd693d08.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/8c1a237ed8075f4bd982ac27fd693d08.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model.
Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/8c1ec083a02d9e8731eb4bfca21f4b21.txt b/scrapped_outputs/8c1ec083a02d9e8731eb4bfca21f4b21.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/8c1ec083a02d9e8731eb4bfca21f4b21.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. 
To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs.
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_ckip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings.
Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps.
This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/8c621a4d2f30ee52271968b54310f777.txt b/scrapped_outputs/8c621a4d2f30ee52271968b54310f777.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/8c621a4d2f30ee52271968b54310f777.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. 
In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. 
downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/8c64850eddd77f4228f4e415bc58a8aa.txt b/scrapped_outputs/8c64850eddd77f4228f4e415bc58a8aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/8c64850eddd77f4228f4e415bc58a8aa.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. 
The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. 
This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. 
text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equally spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equally spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... 
) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. 
guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pt") — +The output format of the generated image embeddings. Choose between: "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt. Output class for WuerstchenPriorPipeline. 
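The callback_on_step_end arguments above are easier to grasp with a short sketch. The callback below is purely illustrative (the function name log_latents and the printed statistic are not part of the library); it relies only on the documented callback signature and on "latents" being a supported tensor input: Copied
import torch
from diffusers import WuerstchenPriorPipeline

prior_pipe = WuerstchenPriorPipeline.from_pretrained(
    "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
).to("cuda")

# Illustrative step-end callback: it receives the pipeline, the step index, the current
# timestep and the requested tensors, and must return the (possibly modified) kwargs dict.
def log_latents(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    print(f"step {step}, t={timestep}: latent std {latents.std().item():.3f}")
    return callback_kwargs

prior_output = prior_pipe(
    "Anthropomorphic cat dressed as a fire fighter",
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
)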
WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with decoder to denoise the image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and +width=int(24*10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embeddings (torch.FloatTensor or List[torch.FloatTensor]) — +Image embeddings either extracted from an image or generated by a prior model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/8c943b15990c35432fb1add89d45f4c8.txt b/scrapped_outputs/8c943b15990c35432fb1add89d45f4c8.txt new file mode 100644 index 0000000000000000000000000000000000000000..94fe480e62afd03a5eb42c54c777fabe8554206b --- /dev/null +++ b/scrapped_outputs/8c943b15990c35432fb1add89d45f4c8.txt @@ -0,0 +1,2889 @@ +Models + +Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models. +The primary function of these models is to denoise an input sample by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$. +The models are built on the base class ModelMixin, which is a torch.nn.Module with basic functionality for saving and loading models both locally and from the HuggingFace Hub. + +ModelMixin + + +class diffusers.ModelMixin + +< +source +> +( +) + + + +Base class for all models. +ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading +and saving models. +config_name (str) — A filename under which the model should be stored when calling +save_pretrained(). + +disable_gradient_checkpointing + +< +source +> +( +) + + + +Deactivates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_gradient_checkpointing + +< +source +> +( +) + + + +Activates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. 
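A minimal sketch of how the two gradient checkpointing switches are typically used during fine-tuning; the checkpoint id is only an example, and any model inheriting from ModelMixin exposes the same methods: Copied
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet"
)

unet.enable_gradient_checkpointing()   # trade extra compute for lower activation memory
unet.train()
# ... training steps would run here ...
unet.disable_gradient_checkpointing()  # back to the default behavior for inference
unet.eval()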
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. 
+ + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + +use_safetensors (bool, optional ) — +If set to True, the pipeline will forcibly load the models from safetensors weights. If set to +None (the default). The pipeline will load using safetensors if safetensors weights are available +and if safetensors is installed. If the to False the pipeline will not use safetensors. + + + +Instantiate a pretrained pytorch model from a pre-trained model configuration. +The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train +the model, you should first set it back in training mode with model.train(). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +num_parameters + +< +source +> +( +only_trainable: bool = False +exclude_embeddings: bool = False + +) +→ +int + +Parameters + +only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters + + +exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embeddings parameters + + +Returns + +int + + + +The number of parameters. + + +Get number of (optionally, trainable or non-embeddings) parameters in the module. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +save_function: typing.Callable = None +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. 
Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.16.0/en/api/models#diffusers.ModelMixin.from_pretrained) class method. + +UNet2DOutput + + +class diffusers.models.unet_2d.UNet2DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states output. Output of last layer of model. + + + + +UNet2DModel + + +class diffusers.UNet2DModel + +< +source +> +( +sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None +in_channels: int = 3 +out_channels: int = 3 +center_input_sample: bool = False +time_embedding_type: str = 'positional' +freq_shift: int = 0 +flip_sin_to_cos: bool = True +down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') +block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) +layers_per_block: int = 2 +mid_block_scale_factor: float = 1 +downsample_padding: int = 1 +act_fn: str = 'silu' +attention_head_dim: typing.Optional[int] = 8 +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +resnet_time_scale_shift: str = 'default' +add_attention: bool = True +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). + + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. + + +freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:True): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")): Tuple of downsample block +types. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")): Tuple of upsample block types. 
+ + +block_out_channels (Tuple[int], optional, defaults to — +obj:(224, 448, 672, 896)): Tuple of block output channels. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. + + +downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +attention_head_dim (int, optional, defaults to 8) — The attention head dimension. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + + +UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +class_labels: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. + + +Returns + +UNet2DOutput or tuple + + + +UNet2DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet1DOutput + + +class diffusers.models.unet_1d.UNet1DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +Hidden states output. Output of last layer of model. 
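Before moving on to the 1D variant below, here is a minimal sketch of the UNet2DModel forward pass documented above; the configuration is an arbitrary small one chosen only so the snippet runs quickly on CPU: Copied
import torch
from diffusers import UNet2DModel

# Small, randomly initialized model purely for illustration.
model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)

noisy_sample = torch.randn(1, 3, 32, 32)  # (batch, channel, height, width)
timestep = torch.tensor([10])

output = model(noisy_sample, timestep)    # UNet2DOutput when return_dict=True (default)
print(output.sample.shape)                # torch.Size([1, 3, 32, 32])

sample, = model(noisy_sample, timestep, return_dict=False)  # plain tuple instead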
+ + + + +UNet1DModel + + +class diffusers.UNet1DModel + +< +source +> +( +sample_size: int = 65536 +sample_rate: typing.Optional[int] = None +in_channels: int = 2 +out_channels: int = 2 +extra_in_channels: int = 0 +time_embedding_type: str = 'fourier' +flip_sin_to_cos: bool = True +use_timestep_embedding: bool = False +freq_shift: float = 0.0 +down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') +mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' +out_block_type: str = None +block_out_channels: typing.Tuple[int] = (32, 32, 64) +act_fn: str = None +norm_num_groups: int = 8 +layers_per_block: int = 1 +downsample_each_block: bool = False + +) + + +Parameters + +sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. + + +in_channels (int, optional, defaults to 2) — Number of channels in the input sample. + + +out_channels (int, optional, defaults to 2) — Number of channels in the output. + + +extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model is initially designed for. + + +time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. + + +freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:False): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(32, 32, 64)): Tuple of block output channels. + + +mid_block_type (str, optional, defaults to “UNetMidBlock1D”) — block type for middle of UNet. + + +out_block_type (str, optional, defaults to None) — optional output processing of UNet. + + +act_fn (str, optional, defaults to None) — optional activation function in UNet blocks. + + +norm_num_groups (int, optional, defaults to 8) — group norm member count in UNet blocks. + + +layers_per_block (int, optional, defaults to 1) — added number of layers in a UNet block. + + +downsample_each_block (int, optional, defaults to False — +experimental feature for using a UNet without upsampling. + + + +UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet1DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch_size, num_channels, sample_size) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. + + +Returns + +UNet1DOutput or tuple + + + +UNet1DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. 
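As a rough usage sketch for the 1D model: the checkpoint id below is an assumption (the Dance Diffusion maestro-150k repository, which stores a UNet1DModel under its unet subfolder); any other 1D UNet checkpoint would work the same way: Copied
import torch
from diffusers import UNet1DModel

# Assumed checkpoint: a Dance Diffusion audio UNet stored in a "unet" subfolder.
model = UNet1DModel.from_pretrained("harmonai/maestro-150k", subfolder="unet")

noisy_audio = torch.randn(1, 2, model.config.sample_size)  # (batch, num_channels, sample_size)
timestep = torch.tensor([10])

output = model(noisy_audio, timestep)  # UNet1DOutput when return_dict=True (default)
print(output.sample.shape)             # same shape as the input sample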
+ + + +UNet2DConditionOutput + + +class diffusers.models.unet_2d_condition.UNet2DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet2DConditionModel + + +class diffusers.UNet2DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +center_input_sample: bool = False +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: typing.Union[int, typing.Tuple[int]] = 1280 +encoder_hid_dim: typing.Optional[int] = None +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +dual_cross_attention: bool = False +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +addition_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +resnet_skip_time_act: bool = False +resnet_out_scale_factor: int = 1.0 +time_embedding_type: str = 'positional' +time_embedding_dim: typing.Optional[int] = None +time_embedding_act_fn: typing.Optional[str] = None +timestep_post_act: typing.Optional[str] = None +time_cond_proj_dim: typing.Optional[int] = None +conv_in_kernel: int = 3 +conv_out_kernel: int = 3 +projection_class_embeddings_input_dim: typing.Optional[int] = None +class_embeddings_concat: bool = False +mid_block_only_cross_attention: typing.Optional[bool] = None +cross_attention_norm: typing.Optional[str] = None +addition_embed_type_num_heads = 64 + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +The mid block type. Choose from UNetMidBlock2DCrossAttn or UNetMidBlock2DSimpleCrossAttn, will skip the +mid block layer if None. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. 
+ + +only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. + + +encoder_hid_dim (int, optional, defaults to None) — +If given, encoder_hidden_states will be projected from this dimension to cross_attention_dim. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". + + +addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + +time_embedding_type (str, optional, default to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. + + +time_embedding_dim (int, optional, default to None) — +An optional override for the dimension of the projected time embedding. + + +time_embedding_act_fn (str, optional, default to None) — +Optional activation function to use on the time embeddings only one time before they as passed to the rest +of the unet. Choose from silu, mish, gelu, and swish. + + +timestep_post_act (str, *optional*, default to None) -- The second activation function to use in timestep embedding. Choose from silu, mishandgelu`. + + +time_cond_proj_dim (int, optional, default to None) — +The dimension of cond_proj layer in timestep embedding. + + +conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. + + +conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. + + +projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +using the “projection” class_embed_type. Required when using the “projection” class_embed_type. + + +class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. 
+ + +mid_block_only_cross_attention (bool, optional, defaults to None) — +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value will be used as the value for mid_block_only_cross_attention. Else, it will +default to False. + + + +UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +UNet2DConditionOutput or tuple + + + +UNet2DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
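To make the conditional forward pass and set_attention_slice concrete, a hedged sketch follows; the checkpoint id is illustrative, and the tensor shapes assume a Stable Diffusion 2.1 style UNet (4 latent channels, 1024-dimensional text embeddings): Copied
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

unet.set_attention_slice("auto")  # compute attention in slices to save memory

sample = torch.randn(1, 4, 96, 96, device="cuda", dtype=torch.float16)                # noisy latents
timestep = torch.tensor([999], device="cuda")
encoder_hidden_states = torch.randn(1, 77, 1024, device="cuda", dtype=torch.float16)  # text encoder states

with torch.no_grad():
    out = unet(sample, timestep, encoder_hidden_states)
print(out.sample.shape)  # torch.Size([1, 4, 96, 96])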
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +UNet3DConditionOutput + + +class diffusers.models.unet_3d_condition.UNet3DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet3DConditionModel + + +class diffusers.UNet3DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') +up_block_types: typing.Tuple[str] = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1024 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 64 + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. 
+ + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + + +UNet3DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +~models.unet_2d_condition.UNet3DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, num_frames, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet3DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~models.unet_2d_condition.UNet3DConditionOutput or tuple + + + +~models.unet_2d_condition.UNet3DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +DecoderOutput + + +class diffusers.models.vae.DecoderOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + + +Output of decoding method. + +VQEncoderOutput + + +class diffusers.models.vq_model.VQEncoderOutput + +< +source +> +( +latents: FloatTensor + +) + + +Parameters + +latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Encoded output sample of the model. Output of the last layer of the model. + + + +Output of VQModel encoding method. 
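+As a minimal sketch (not part of the upstream reference), the attention-control methods above (set_attention_slice, set_attn_processor, set_default_attn_processor) can be exercised like this, assuming the Stable Diffusion v1-5 UNet checkpoint; the same methods are also exposed by UNet3DConditionModel and ControlNetModel:
+
+>>> import torch
+>>> from diffusers import UNet2DConditionModel
+>>> from diffusers.models.attention_processor import AttnProcessor2_0
+
+>>> # load a UNet whose attention layers will be reconfigured
+>>> unet = UNet2DConditionModel.from_pretrained(
+...     "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
+... )
+
+>>> # "auto" halves the input to the attention heads, so attention runs in two steps to save memory
+>>> unet.set_attention_slice("auto")
+
+>>> # swap every attention layer to the scaled-dot-product processor (requires PyTorch 2.0)
+>>> unet.set_attn_processor(AttnProcessor2_0())
+
+>>> # restore the library default processors
+>>> unet.set_default_attn_processor()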
+ +VQModel + + +class diffusers.VQModel + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 3 +sample_size: int = 32 +num_vq_embeddings: int = 256 +norm_num_groups: int = 32 +vq_embed_dim: typing.Optional[int] = None +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. + + +vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray +Kavukcuoglu. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +return_dict: bool = True + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + + +AutoencoderKLOutput + + +class diffusers.models.autoencoder_kl.AutoencoderKLOutput + +< +source +> +( +latent_dist: DiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. 
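+A short usage sketch for the VQModel documented above, assuming the CompVis/ldm-celebahq-256 checkpoint which stores a VQModel under its vqvae subfolder:
+
+>>> import torch
+>>> from diffusers import VQModel
+
+>>> vqvae = VQModel.from_pretrained("CompVis/ldm-celebahq-256", subfolder="vqvae")
+
+>>> # encode a batch of RGB images scaled to [-1, 1] into continuous latents
+>>> images = torch.randn(1, 3, 256, 256)
+>>> latents = vqvae.encode(images).latents
+
+>>> # decode; quantization against the codebook happens inside `decode`
+>>> reconstruction = vqvae.decode(latents).sample  # same (1, 3, 256, 256) shape as the input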
+ +AutoencoderKL + + +class diffusers.AutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma +and Max Welling. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +disable_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_slicing was previously invoked, this method will go back to computing +decoding in one step. + +disable_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. + +enable_tiling + +< +source +> +( +use_tiling: bool = True + +) + + + +Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow +the processing of larger images. + +forward + +< +source +> +( +sample: FloatTensor +sample_posterior: bool = False +return_dict: bool = True +generator: typing.Optional[torch._C.Generator] = None + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
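+The encode/decode round trip, scaling_factor, and the memory helpers documented in this section can be combined as in the following sketch, which assumes the Stable Diffusion v1-5 VAE checkpoint:
+
+>>> import torch
+>>> from diffusers import AutoencoderKL
+
+>>> vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
+
+>>> # decode in slices/tiles to keep memory roughly constant for large batches or images
+>>> vae.enable_slicing()
+>>> vae.enable_tiling()
+
+>>> # encode: sample from the posterior and scale the latents to unit variance
+>>> images = torch.randn(1, 3, 512, 512)
+>>> posterior = vae.encode(images).latent_dist
+>>> latents = posterior.sample() * vae.config.scaling_factor
+
+>>> # decode: undo the scaling first
+>>> reconstruction = vae.decode(latents / vae.config.scaling_factor).sample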
+ + + + +tiled_decode + +< +source +> +( +z: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled decoding is — + + +different from non-tiled decoding due to each tile using a different decoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +z (torch.FloatTensor): Input batch of latent vectors. return_dict (bool, optional, defaults to +True): +Whether or not to return a DecoderOutput instead of a plain tuple. + + + +Decode a batch of images using a tiled decoder. + +tiled_encode + +< +source +> +( +x: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is — + + +different from non-tiled encoding due to each tile using a different encoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +x (torch.FloatTensor): Input batch of images. return_dict (bool, optional, defaults to True): +Whether or not to return a AutoencoderKLOutput instead of a plain tuple. + + + +Encode a batch of images using a tiled encoder. + +Transformer2DModel + + +class diffusers.Transformer2DModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +num_vector_embeds: typing.Optional[int] = None +patch_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +num_embeds_ada_norm: typing.Optional[int] = None +use_linear_projection: bool = False +only_cross_attention: bool = False +upcast_attention: bool = False +norm_type: str = 'layer_norm' +norm_elementwise_affine: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +num_vector_embeds (int, optional) — +Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. +Includes the class for the masked latent pixel. 
+ + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +num_embeds_ada_norm ( int, optional) — Pass if at least one of the norm_layers is AdaLayerNorm. +The number of diffusion steps used during training. Note that this is fixed at training time as it is used +to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for +up to but not more than steps than num_embeds_ada_norm. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + + +Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual +embeddings) inputs. +When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard +transformer action. Finally, reshape to image. +When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional +embeddings applied, see ImagePositionalEmbeddings. Then apply standard transformer action. Finally, predict +classes of unnoised image. +Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised +image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +Transformer2DModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continuous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +Transformer2DModelOutput or tuple + + + +Transformer2DModelOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_2d.Transformer2DModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +Hidden states conditioned on encoder_hidden_states input. If discrete, returns probability distributions +for the unnoised latent pixels. 
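+A toy configuration of the continuous-input mode described above (hyperparameters invented for the example and far smaller than any real checkpoint):
+
+>>> import torch
+>>> from diffusers import Transformer2DModel
+
+>>> # continuous input: in_channels is set, num_vector_embeds and patch_size are not
+>>> model = Transformer2DModel(
+...     num_attention_heads=1,
+...     attention_head_dim=32,
+...     in_channels=32,
+...     norm_num_groups=8,
+... )
+
+>>> # a (batch, channel, height, width) feature map goes in; a same-shaped feature map comes out
+>>> hidden_states = torch.randn(1, 32, 16, 16)
+>>> sample = model(hidden_states).sample  # shape (1, 32, 16, 16)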
+ + + + +TransformerTemporalModel + + +class diffusers.models.transformer_temporal.TransformerTemporalModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +norm_elementwise_affine: bool = True +double_self_attention: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + +double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers + + + +Transformer model for video-like data. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +num_frames = 1 +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +~models.transformer_2d.TransformerTemporalModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +~models.transformer_2d.TransformerTemporalModelOutput or tuple + + + +~models.transformer_2d.TransformerTemporalModelOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. 
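+A similar toy configuration showing the frame-folded input layout this temporal transformer expects (again with hyperparameters invented for the example):
+
+>>> import torch
+>>> from diffusers.models.transformer_temporal import TransformerTemporalModel
+
+>>> model = TransformerTemporalModel(
+...     num_attention_heads=1,
+...     attention_head_dim=32,
+...     in_channels=32,
+...     norm_num_groups=8,
+... )
+
+>>> # frames are folded into the batch dimension: (batch * num_frames, channel, height, width)
+>>> batch, num_frames = 2, 4
+>>> hidden_states = torch.randn(batch * num_frames, 32, 8, 8)
+>>> sample = model(hidden_states, num_frames=num_frames).sample  # shape (8, 32, 8, 8)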
+ + + +Transformer2DModelOutput + + +class diffusers.models.transformer_temporal.TransformerTemporalModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. + + + + +PriorTransformer + + +class diffusers.PriorTransformer + +< +source +> +( +num_attention_heads: int = 32 +attention_head_dim: int = 64 +num_layers: int = 20 +embedding_dim: int = 768 +num_embeddings = 77 +additional_embeddings = 4 +dropout: float = 0.0 + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. + + +num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. + + +embedding_dim (int, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP +image embeddings and text embeddings are both the same dimension. + + +num_embeddings (int, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the +length of the prompt after it has been tokenized. + + +additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + + +The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the +transformer predicts the image embeddings through a denoising diffusion process. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +For more details, see the original paper: https://arxiv.org/abs/2204.06125 + +forward + +< +source +> +( +hidden_states +timestep: typing.Union[torch.Tensor, float, int] +proj_embedding: FloatTensor +encoder_hidden_states: FloatTensor +attention_mask: typing.Optional[torch.BoolTensor] = None +return_dict: bool = True + +) +→ +PriorTransformerOutput or tuple + +Parameters + +hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +x_t, the currently predicted image embeddings. + + +timestep (torch.long) — +Current denoising step. + + +proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. + + +encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. + + +attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. + + +Returns + +PriorTransformerOutput or tuple + + + +PriorTransformerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. 
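+One way to exercise the prior on random inputs is sketched below; the checkpoint name is an assumption (the Karlo unCLIP weights ship a PriorTransformer under a prior subfolder):
+
+>>> import torch
+>>> from diffusers import PriorTransformer
+
+>>> prior = PriorTransformer.from_pretrained("kakaobrain/karlo-v1-alpha", subfolder="prior")
+
+>>> batch = 2
+>>> dim = prior.config.embedding_dim    # CLIP embedding dimension
+>>> seq = prior.config.num_embeddings   # tokenized prompt length
+>>> out = prior(
+...     hidden_states=torch.randn(batch, dim),               # x_t, the noisy image embedding
+...     timestep=10,                                         # current denoising step
+...     proj_embedding=torch.randn(batch, dim),              # pooled text embedding
+...     encoder_hidden_states=torch.randn(batch, seq, dim),  # per-token text embeddings
+... )
+>>> out.predicted_image_embedding.shape  # (batch, embedding_dim)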
+ + + +PriorTransformerOutput + + +class diffusers.models.prior_transformer.PriorTransformerOutput + +< +source +> +( +predicted_image_embedding: FloatTensor + +) + + +Parameters + +predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. + + + + +ControlNetOutput + + +class diffusers.models.controlnet.ControlNetOutput + +< +source +> +( +down_block_res_samples: typing.Tuple[torch.Tensor] +mid_block_res_sample: Tensor + +) + + + + +ControlNetModel + + +class diffusers.ControlNetModel + +< +source +> +( +in_channels: int = 4 +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +projection_class_embeddings_input_dim: typing.Optional[int] = None +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +global_pool_conditions: bool = False + +) + + + + +from_unet + +< +source +> +( +unet: UNet2DConditionModel +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +load_weights_from_unet: bool = True + +) + + +Parameters + +unet (UNet2DConditionModel) — +UNet model which weights are copied to the ControlNet. Note that all configuration options are also +copied where applicable. + + + +Instantiate Controlnet class from UNet2DConditionModel. + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
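+from_unet is typically used to bootstrap a fresh ControlNet from an existing Stable Diffusion UNet before training; a sketch with an assumed checkpoint and a hypothetical output directory:
+
+>>> from diffusers import ControlNetModel, UNet2DConditionModel
+
+>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
+
+>>> # copy the UNet's configuration and matching encoder weights into a new ControlNet
+>>> controlnet = ControlNetModel.from_unet(unet, load_weights_from_unet=True)
+>>> controlnet.save_pretrained("./controlnet-sd15-init")  # hypothetical output directory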
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +FlaxModelMixin + + +class diffusers.FlaxModelMixin + +< +source +> +( +) + + + +Base class for all flax models. +FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, +downloading and saving models. + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +dtype: dtype = +*model_args +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids are namespaced under a user or organization name, like +runwayml/stable-diffusion-v1-5. +A path to a directory containing model weights saved using save_pretrained(), +e.g., ./my_model_directory/. + + + +dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified all the computation will be performed with the given dtype. +Note that this only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and +~ModelMixin.to_bf16. + + +model_args (sequence of positional arguments, optional) — +All remaining positional arguments will be passed to the underlying model’s __init__ method. 
+ + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., +output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, **kwargs will be directly passed to the +underlying model’s __init__ method (we assume all relevant updates to the configuration have +already been done) +If a configuration is not provided, kwargs will be first passed to the configuration class +initialization function (from_config()). Each key of kwargs that corresponds to +a configuration attribute will be used to override said attribute with the supplied kwargs +value. Remaining keys that do not correspond to any configuration attribute will be passed to the +underlying model’s __init__ function. + + + + +Instantiate a pretrained flax model from a pre-trained model configuration. +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +is_main_process: bool = True + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. 
+ + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.16.0/en/api/models#diffusers.FlaxModelMixin.from_pretrained) class method + +to_bf16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. +This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) + +to_fp16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. +This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. 
+ +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you want don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) + +to_fp32 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_f16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) + +FlaxUNet2DConditionOutput + + +class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxUNet2DConditionModel + + +class diffusers.FlaxUNet2DConditionModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +use_memory_efficient_attention: bool = False +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. 
+ + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — +The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. The corresponding class names will be: “FlaxUpBlock2D”, +“FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +use_memory_efficient_attention (bool, optional, defaults to False) — +enable memory efficient attention https://arxiv.org/abs/2112.05682 + + + +FlaxUNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a +timestep and returns sample shaped output. +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxDecoderOutput + + +class diffusers.models.vae_flax.FlaxDecoderOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +Parameters dtype + + + +Output of decoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKLOutput + + +class diffusers.models.vae_flax.FlaxAutoencoderKLOutput + +< +source +> +( +latent_dist: FlaxDiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. 
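+A possible way to load the FlaxUNet2DConditionModel documented above with bfloat16 computation and memory-efficient attention; the checkpoint name, the subfolder, and the keyword routing into the config are assumptions of this sketch:
+
+>>> import jax.numpy as jnp
+>>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # run the computation in bfloat16 and enable memory-efficient attention
+>>> unet, params = FlaxUNet2DConditionModel.from_pretrained(
+...     "runwayml/stable-diffusion-v1-5",
+...     subfolder="unet",
+...     dtype=jnp.bfloat16,
+...     use_memory_efficient_attention=True,
+... )
+>>> params = unet.to_bf16(params)  # optionally cast the parameters as well, not just the computation dtype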
+ +FlaxAutoencoderKL + + +class diffusers.FlaxAutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 +dtype: dtype = +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — +Input channels + + +out_channels (int, optional, defaults to 3) — +Output channels + + +down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +DownEncoder block type + + +up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +UpDecoder block type + + +block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple containing the number of output channels for each block + + +layers_per_block (int, optional, defaults to 2) — +Number of Resnet layer for each block + + +act_fn (str, optional, defaults to silu) — +Activation function + + +latent_channels (int, optional, defaults to 4) — +Latent space channels + + +norm_num_groups (int, optional, defaults to 32) — +Norm num group + + +sample_size (int, optional, defaults to 32) — +Sample input size + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 +/ scaling_factor z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +parameters dtype + + + +Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational +Bayes by Diederik P. Kingma and Max Welling. +This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxControlNetOutput + + +class diffusers.models.controlnet_flax.FlaxControlNetOutput + +< +source +> +( +down_block_res_samples: ndarray +mid_block_res_sample: ndarray + +) + + + + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. 
+ +FlaxControlNetModel + + +class diffusers.FlaxControlNetModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Tuple[int] = (16, 32, 96, 256) +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert it to rgb if it’s bgr + + +conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in conditioning_embedding layer + + + +Quoting from https://arxiv.org/abs/2302.05543: “Stable Diffusion uses a pre-processing method similar to VQ-GAN +[11] to convert the entire dataset of 512 × 512 images into smaller 64 × 64 “latent images” for stabilized +training. This requires ControlNets to convert image-based conditions to 64 × 64 feature space to match the +convolution size. We use a tiny network E(·) of four convolution layers with 4 × 4 kernels and 2 × 2 strides +(activated by ReLU, channels are 16, 32, 64, 128, initialized with Gaussian weights, trained jointly with the full +model) to encode image-space conditions … into feature maps …” +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. 
+Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization diff --git a/scrapped_outputs/8c9945b19b3dceed96265cab518eacd5.txt b/scrapped_outputs/8c9945b19b3dceed96265cab518eacd5.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/8c9945b19b3dceed96265cab518eacd5.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 
Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... 
edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] SemanticStableDiffusionPipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/8cb42b65980c4419557151185748e987.txt b/scrapped_outputs/8cb42b65980c4419557151185748e987.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/8cb42b65980c4419557151185748e987.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from its GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +An image_processor used to preprocess images from CLIP. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
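The callback arguments above are easier to see in a short sketch than in prose. The snippet below is a minimal illustration (not part of the official example that follows): it only prints the norm of the prior latents at each step and returns the callback_kwargs dict so the pipeline can pick up any tensors that were modified.

import torch
from diffusers import KandinskyV22PriorPipeline

pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")

def inspect_latents(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in
    # callback_on_step_end_tensor_inputs below
    latents = callback_kwargs["latents"]
    print(f"prior step {step}, timestep {timestep}, latents norm {latents.float().norm().item():.2f}")
    # returning the dict lets the pipeline pick up any tensors you modified
    return callback_kwargs

out = pipe_prior(
    "red cat, 4k photo",
    callback_on_step_end=inspect_latents,
    callback_on_step_end_tensor_inputs=["latents"],
)
image_embeds, negative_image_embeds = out.to_tuple()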
Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
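Because the combined pipeline runs the prior and the decoder in a single call, it takes two sets of arguments: the prior_* arguments steer the text-to-image-embedding stage, while the plain arguments steer the latent decoder. A rough sketch with illustrative values, loading through AutoPipelineForText2Image as in the example further below (which resolves to this combined pipeline for that checkpoint):

import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="A lion in galaxies, spirals, nebulae, stars, smoke",
    negative_prompt="low quality, bad quality",
    # prior_* arguments control the prior (text -> image embedding) stage
    prior_guidance_scale=4.0,
    prior_num_inference_steps=25,
    # the remaining arguments control the decoder (embedding -> image) stage
    guidance_scale=4.0,
    num_inference_steps=50,
    height=768,
    width=768,
).images[0]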
__call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. 
The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. 
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: A sketch of hint-conditioned generation is given after the KandinskyV22PriorEmb2EmbPipeline description below. KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
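Since the Examples section of KandinskyV22ControlnetPipeline above is empty in the source, here is a hedged sketch of hint-conditioned generation. The depth checkpoint name and the zero-filled placeholder hint are assumptions for illustration; in practice the hint should be a real depth (or other control) map scaled to [0, 1] with shape (batch, 3, height, width).

import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline

# Prior stage: text -> image embeddings
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_emb, zero_image_emb = pipe_prior("a robot, 4k photo").to_tuple()

# ControlNet decoder stage: embeddings + hint -> image
# (the depth checkpoint name below is an assumption, not taken from this page)
pipe = KandinskyV22ControlnetPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder hint; replace with a prepared depth map on the same device/dtype
hint = torch.zeros(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    hint=hint,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]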
__call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. 
+weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. 
callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. 
Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
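The strength argument is the main image-to-image knob: low values keep most of the input image, values close to 1 mostly ignore it. A short sketch contrasting the two, using the same setup as the example below (values are illustrative):

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 768))

prompt = "A fantasy landscape, Cinematic lighting"

# strength=0.3 lightly restyles the sketch, strength=0.9 repaints most of it
subtle = pipe(prompt=prompt, image=init_image, strength=0.3, num_inference_steps=25).images[0]
heavy = pipe(prompt=prompt, image=init_image, strength=0.9, num_inference_steps=25).images[0]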
Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe( + prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25 +).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky 2.2 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
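Before the full argument reference below, a minimal sketch of driving this decoder-level inpainting pipeline with prior embeddings and a NumPy mask; per the mask_image description that follows, white pixels are repainted and black pixels are kept. The prompt and mask values here are illustrative only.

import numpy as np
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22InpaintPipeline
from diffusers.utils import load_image

# Prior stage: text -> image embeddings
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_emb, zero_image_emb = pipe_prior("a hat").to_tuple()

pipe = KandinskyV22InpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

# 1.0 (white) marks the region to repaint, 0.0 (black) is preserved
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    image=init_image,
    mask_image=mask,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]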
__call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step of the prior during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as the callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
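Both callback hooks above share the same contract: the callable receives the pipeline, the step index, the timestep, and a dict containing the tensors named in the corresponding *_tensor_inputs list, and it returns that dict. The snippet below is a minimal, illustrative sketch of such a callback for the decoder stage; the function name is made up, and pipe, prompt, original_image, and mask are assumed to be set up exactly as in the example that follows. Copied # Illustrative step-end callback; `pipe`, `prompt`, `original_image` and `mask` come from the example below.
def log_latent_stats(pipeline, step, timestep, callback_kwargs):
    # "latents" is available here because it is listed in callback_on_step_end_tensor_inputs
    latents = callback_kwargs["latents"]
    print(f"decoder step {step} at timestep {timestep}: latent mean = {latents.mean().item():.4f}")
    return callback_kwargs  # the returned dict is handed back to the pipeline

image = pipe(
    prompt=prompt,
    image=original_image,
    mask_image=mask,
    callback_on_step_end=log_latent_stats,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]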
Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/8cbd89d8dca17d4b376257b30559915f.txt b/scrapped_outputs/8cbd89d8dca17d4b376257b30559915f.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/8cbd89d8dca17d4b376257b30559915f.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real-world applications and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyright issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. 
Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/8d06757c37bd56bad79e20f97d4fe070.txt b/scrapped_outputs/8d06757c37bd56bad79e20f97d4fe070.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/8d1c0990fff5810befab5e5778f3fb71.txt b/scrapped_outputs/8d1c0990fff5810befab5e5778f3fb71.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/8d1c0990fff5810befab5e5778f3fb71.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at +the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — +The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) — +Whether or not to return an AudioPipelineOutput instead of a plain tuple. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline +from scipy.io.wavfile import write + +model_id = "harmonai/maestro-150k" +pipe = DiffusionPipeline.from_pretrained(model_id) +pipe = pipe.to("cuda") + +audios = pipe(audio_length_in_s=4.0).audios + +# To save locally +for i, audio in enumerate(audios): + write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose()) + +# To display in Google Colab +import IPython.display as ipd + +for audio in audios: + display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/8d3b31de33feca71ae237b760c887f34.txt b/scrapped_outputs/8d3b31de33feca71ae237b760c887f34.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/8d3b31de33feca71ae237b760c887f34.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for the exponential multistep update instead of relying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. 
To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
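Before the remaining method references, here is a minimal usage sketch showing how this scheduler is typically swapped into an existing pipeline, following the solver_order recommendation from the Tips above. The checkpoint name is only an example; any Stable Diffusion-style pipeline should work the same way. Copied import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

# Example checkpoint; swap in any Stable Diffusion-style pipeline
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# Reuse the pipeline's existing noise schedule; solver_order=2 is the recommended setting for guided sampling
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=20).images[0]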
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/8d47dc064a2945a321a35ae1508b6387.txt b/scrapped_outputs/8d47dc064a2945a321a35ae1508b6387.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c0a08b7cecfdee2741d5b8ad6d0c8c331f822a1 --- /dev/null +++ b/scrapped_outputs/8d47dc064a2945a321a35ae1508b6387.txt @@ -0,0 +1,29 @@ +How to use the ONNX Runtime for inference + +🤗 Diffusers provides a Stable Diffusion pipeline compatible with the ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX (including CPUs), and where an accelerated version of PyTorch is not available. + +Installation + +TODO + +Stable Diffusion Inference + +The snippet below demonstrates how to use the ONNX runtime. You need to use StableDiffusionOnnxPipeline instead of StableDiffusionPipeline. You also need to download the weights from the onnx branch of the repository, and indicate the runtime provider you want to use. + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionOnnxPipeline + +pipe = StableDiffusionOnnxPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + revision="onnx", + provider="CUDAExecutionProvider", +) + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] + +Known Issues + +Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. 
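Until batching is resolved, a simple workaround is to loop over prompts one at a time, reusing the pipe object created in the snippet above; the sketch below is only illustrative. Copied # Workaround sketch: generate one prompt at a time instead of a single batched call
prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a photo of a corgi wearing sunglasses",
]
images = [pipe(prompt).images[0] for prompt in prompts]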
diff --git a/scrapped_outputs/8d781ee862f3019123c7e118a30316dc.txt b/scrapped_outputs/8d781ee862f3019123c7e118a30316dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/8dbe50a730507fe224a672d405d59b39.txt b/scrapped_outputs/8dbe50a730507fe224a672d405d59b39.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/8dbe50a730507fe224a672d405d59b39.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. 
Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. 
Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an 
image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/8dd42c4e14b37633d53d07c83284f467.txt b/scrapped_outputs/8dd42c4e14b37633d53d07c83284f467.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/8dd42c4e14b37633d53d07c83284f467.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! 
rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. 
sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
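To see how set_timesteps(), scale_model_input(), and step() fit together, here is a minimal, self-contained sketch of a bare denoising loop. The randomly initialized UNet2DModel is only a stand-in for a trained model (so the output is meaningless noise), but the control flow matches how pipelines drive the scheduler. Copied import torch
from diffusers import DDIMScheduler, UNet2DModel

# Untrained toy model used only to illustrate the loop; use a trained checkpoint in practice
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = DDIMScheduler(num_train_timesteps=1000)

scheduler.set_timesteps(50)         # run inference with 50 of the 1000 training timesteps
sample = torch.randn(1, 3, 32, 32)  # start from pure Gaussian noise

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # a no-op for DDIM, kept for scheduler interchangeability
    with torch.no_grad():
        noise_pred = model(model_input, t).sample          # the model predicts the noise (epsilon)
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # reverse one diffusion step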
DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/8df4ea0060b82a3eb2434739694db986.txt b/scrapped_outputs/8df4ea0060b82a3eb2434739694db986.txt new file mode 100644 index 0000000000000000000000000000000000000000..49e19fb4c11ed7fa69c26f38e304a1a47862bdca --- /dev/null +++ b/scrapped_outputs/8df4ea0060b82a3eb2434739694db986.txt @@ -0,0 +1,466 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. 
Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with canny edge detection A monochrome image with white edges on a black background. 
TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image An OpenPose bone image. TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image An mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1 Trained with semantic segmentation A custom segmentation protocol image. T2I-Adapter with Stable Diffusion 1.5 TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 T2I-Adapter with Stable Diffusion XL Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as follows: MultiAdapter combines the keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet, but it uses a smaller auxiliary network that is only run once for the entire diffusion process. +In exchange, T2I-Adapter tends to perform slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you pass multiple adapters as a +list, the outputs from each adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weights which will be multiplied with each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... ) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. 
Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. 
As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. 
The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. 
adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
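Both of these VAE options are plain method calls on the pipeline and can be combined with each other. A minimal sketch, reusing the sketch-adapter checkpoint from the SDXL example above (no new model names are introduced here):

Copied
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

adapter = T2IAdapter.from_pretrained(
    "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# split VAE decoding across batch slices and spatial tiles to lower peak memory
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

The corresponding disable_vae_slicing() and disable_vae_tiling() calls restore single-step decoding.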
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
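When the same prompt is reused for many control images, the text embeddings can be computed once with encode_prompt() and passed to __call__ directly. A hedged sketch, assuming pipe and sketch_image from the SDXL example above, and assuming encode_prompt() returns the four embedding tensors in the order (prompt, negative prompt, pooled, negative pooled) as in the other Stable Diffusion XL pipelines:

Copied
# precompute the text embeddings once ...
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a dog in real world, high quality",
    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# ... then reuse them for any number of control images
image = pipe(
    image=sketch_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
).images[0]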
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/8dffc25df6689276cfdb3fa4323ced85.txt b/scrapped_outputs/8dffc25df6689276cfdb3fa4323ced85.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/8dffc25df6689276cfdb3fa4323ced85.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. 
This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. 
Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
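Before the full argument list below, a short orientation: the prior_*-prefixed call arguments are routed to the Stage C prior, while the remaining arguments control the Stage B/A decoding. A brief sketch, using the same warp-ai/Wuerstchen checkpoint as in the Examples further down:

Copied
import torch
from diffusers import WuerstchenCombinedPipeline
from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS

pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to("cuda")

output = pipe(
    prompt="Anthropomorphic cat dressed as a fire fighter",
    height=1024,
    width=1024,
    prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS,  # timestep spacing for the Stage C prior
    prior_guidance_scale=4.0,                   # guidance for the prior
    decoder_guidance_scale=0.0,                 # guidance for the Stage B decoder
    num_images_per_prompt=1,
)
image = output.images[0]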
__call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt).images enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
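Either offloading strategy is enabled with a single call before running inference, and the pipeline does not need to be moved to the GPU manually beforehand. A minimal sketch with the combined pipeline:

Copied
import torch
from diffusers import WuerstchenCombinedPipeline

pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16)

# move each whole sub-model to the GPU only while it runs (moderate savings, small slowdown) ...
pipe.enable_model_cpu_offload()
# ... or offload submodule by submodule for the largest savings at a larger speed cost:
# pipe.enable_sequential_cpu_offload()

images = pipe(prompt="an image of a shiba inu, donning a spacesuit and helmet").images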
WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt. Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with decoder to denoise the image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24 * 10.67)=256 and +width=int(24 * 10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
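To make the latent_dim_scale arithmetic above concrete, the short sketch below (using the 24x24 embedding size quoted in the description, purely for illustration) shows how the VQ latent resolution is derived from the image-embedding resolution:

# Illustrative only: derive the VQ latent resolution from the image-embedding resolution.
latent_dim_scale = 10.67                 # default multiplier
embedding_height = embedding_width = 24  # example size from the docstring

vq_latent_height = int(embedding_height * latent_dim_scale)  # 256
vq_latent_width = int(embedding_width * latent_dim_scale)    # 256
print(vq_latent_height, vq_latent_width)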
__call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embeddings (torch.FloatTensor or List[torch.FloatTensor]) — +Image embeddings either extracted from an image or generated by a prior model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of Imagen +Paper. Guidance scale is enabled by setting +guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +...
).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/8e04f2ae34cea75498dfb5c43f7f5f35.txt b/scrapped_outputs/8e04f2ae34cea75498dfb5c43f7f5f35.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/8e04f2ae34cea75498dfb5c43f7f5f35.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. To use with Stable Diffusion v2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model.
latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. 
This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/8e0614315555505f542c29dab04d4044.txt b/scrapped_outputs/8e0614315555505f542c29dab04d4044.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/8e0614315555505f542c29dab04d4044.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. 
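If you want to statically reshape the SDXL pipeline as well, a minimal sketch is shown below; it assumes OVStableDiffusionXLPipeline exposes the same reshape() and compile() methods used in the Stable Diffusion example above, and the 1024x1024 resolution is only an illustrative choice.

from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)

# Fix the input/output shapes, then compile once before running inference
batch_size, num_images, height, width = 1, 1, 1024, 1024
pipeline.reshape(batch_size, height, width, num_images)
pipeline.compile()

image = pipeline(
    "sailing ship in storm by Rembrandt",
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]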
diff --git a/scrapped_outputs/8e09816df800e906ba4737e6fa9f2176.txt b/scrapped_outputs/8e09816df800e906ba4737e6fa9f2176.txt new file mode 100644 index 0000000000000000000000000000000000000000..880c99be557ecb33b18849c5b32298e2e7b85f9f --- /dev/null +++ b/scrapped_outputs/8e09816df800e906ba4737e6fa9f2176.txt @@ -0,0 +1,36 @@ +Load LoRAs for inference There are many adapter types (with LoRAs being the most popular) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. In this tutorial, you’ll learn how to easily load and manage adapters for inference with the 🤗 PEFT integration in 🤗 Diffusers. You’ll use LoRA as the main adapter technique, so you’ll see the terms LoRA and adapter used interchangeably. Let’s first install all the required libraries. Copied !pip install -q transformers accelerate peft diffusers Now, load a pipeline with a Stable Diffusion XL (SDXL) checkpoint: Copied from diffusers import DiffusionPipeline +import torch + +pipe_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to("cuda") Next, load a CiroN2022/toy-face adapter with the load_lora_weights() method. With the 🤗 PEFT integration, you can assign a specific adapter_name to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter "toy". Copied pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") Make sure to include the token toy_face in the prompt and then you can perform inference: Copied prompt = "toy_face of a hacker with a hoodie" + +lora_scale = 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image With the adapter_name parameter, it is really easy to use another adapter for inference! Load the nerijs/pixel-art-xl adapter that has been fine-tuned to generate pixel art images and call it "pixel". The pipeline automatically sets the first loaded adapter ("toy") as the active adapter, but you can activate the "pixel" adapter with the set_adapters() method: Copied pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipe.set_adapters("pixel") Make sure you include the token pixel art in your prompt to generate a pixel art image: Copied prompt = "a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Merge adapters You can also merge different adapter checkpoints for inference to blend their styles together. Once again, use the set_adapters() method to activate the pixel and toy adapters and specify the weights for how they should be merged. Copied pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0]) LoRA checkpoints in the diffusion community are almost always obtained with DreamBooth. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts. Remember to use the trigger words for CiroN2022/toy-face and nerijs/pixel-art-xl (these are found in their repositories) in the prompt to generate an image.
Copied prompt = "toy_face of a hacker with a hoodie, pixel art" +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}, generator=torch.manual_seed(0) +).images[0] +image Impressive! As you can see, the model generated an image that mixed the characteristics of both adapters. Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the Merge LoRAs guide! To return to only using one adapter, use the set_adapters() method to activate the "toy" adapter: Copied pipe.set_adapters("toy") + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe( + prompt, num_inference_steps=30, cross_attention_kwargs={"scale": lora_scale}, generator=torch.manual_seed(0) +).images[0] +image Or to disable all adapters entirely, use the disable_lora() method to return the base model. Copied pipe.disable_lora() + +prompt = "toy_face of a hacker with a hoodie" +lora_scale= 0.9 +image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0] +image Manage active adapters You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, use the get_active_adapters() method to check the list of active adapters: Copied active_adapters = pipe.get_active_adapters() +active_adapters +["toy", "pixel"] You can also get the active adapters of each pipeline component with get_list_adapters(): Copied list_adapters_component_wise = pipe.get_list_adapters() +list_adapters_component_wise +{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]} diff --git a/scrapped_outputs/8e747fb38f41195ef1ef44212bc869c5.txt b/scrapped_outputs/8e747fb38f41195ef1ef44212bc869c5.txt new file mode 100644 index 0000000000000000000000000000000000000000..cbdfab551c65a04d22ed1db010bb50b8fb750880 --- /dev/null +++ b/scrapped_outputs/8e747fb38f41195ef1ef44212bc869c5.txt @@ -0,0 +1,852 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. 
❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. 
If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +...
) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. 
When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
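As a quick illustration of enable_freeu(), the sketch below turns it on for the ControlNet text-to-image pipeline; the scaling values shown are only illustrative placeholders, so check the official FreeU repository for the combinations recommended for your base model.

import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Illustrative FreeU settings: s1/s2 attenuate the skip features, b1/b2 amplify the backbone features.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

# ...run the pipeline as usual...

# Turn FreeU back off when you no longer want the re-weighting applied.
pipe.disable_freeu()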
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. 
If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
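The memory helpers documented in this section can be stacked on the same pipeline. A minimal sketch, reusing the checkpoints from the example above (whether you also want attention slicing depends on your attention backend, as the warning above explains):

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# Reuse the checkpoints from the example above.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Decode the VAE in slices (and tiles, for large images) to cap peak memory.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# Only uncomment this if you are not already using SDPA or xFormers attention.
# pipe.enable_attention_slicing()

# Keep sub-models on the CPU until they are actually needed.
pipe.enable_model_cpu_offload()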
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). 
It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask the image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for a NumPy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image default to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. 
+ cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
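To make the output class above concrete, here is a short sketch of how it is typically consumed; pipe stands in for any of the Stable Diffusion pipelines documented on this page:

# With return_dict=True (the default), the pipeline returns a StableDiffusionPipelineOutput.
result = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20)

# nsfw_content_detected is None when safety checking could not be performed.
flags = result.nsfw_content_detected or [False] * len(result.images)
for i, (image, flagged) in enumerate(zip(result.images, flags)):
    # Only save images the safety checker did not flag.
    if not flagged:
        image.save(f"output_{i}.png")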
FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
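replace() follows the usual Flax struct-dataclass semantics: it returns an updated copy rather than modifying the output in place. A small illustrative sketch, assuming out is a FlaxStableDiffusionPipelineOutput obtained with return_dict=True:

# Build a copy of the output with the NSFW flags cleared; `out` itself is unchanged.
cleared = out.replace(nsfw_content_detected=[False] * len(out.images))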
diff --git a/scrapped_outputs/8e7de99e005e6fce99e23d3813da2613.txt b/scrapped_outputs/8e7de99e005e6fce99e23d3813da2613.txt new file mode 100644 index 0000000000000000000000000000000000000000..0567d91797bc3b6eb1e1e6a148852e2564293561 --- /dev/null +++ b/scrapped_outputs/8e7de99e005e6fce99e23d3813da2613.txt @@ -0,0 +1,152 @@ +RePaint + + +Overview + +RePaint: Inpainting using Denoising Diffusion Probabilistic Models (PNDM) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool. +The abstract of the paper is the following: +Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. +RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. +The original codebase can be found here. 
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_repaint.py +Image Inpainting +- + +Usage example + + + + Copied +from io import BytesIO + +import torch + +import PIL +import requests +from diffusers import RePaintPipeline, RePaintScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +# Load the original image and the mask as PIL images +original_image = download_image(img_url).resize((256, 256)) +mask_image = download_image(mask_url).resize((256, 256)) + +# Load the RePaint scheduler and pipeline based on a pretrained DDPM model +scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256") +pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler) +pipe = pipe.to("cuda") + +generator = torch.Generator(device="cuda").manual_seed(0) +output = pipe( + original_image=original_image, + mask_image=mask_image, + num_inference_steps=250, + eta=0.0, + jump_length=10, + jump_n_sample=10, + generator=generator, +) +inpainted_image = output.images[0] + +RePaintPipeline + + +class diffusers.RePaintPipeline + +< +source +> +( +unet +scheduler + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] +mask_image: typing.Union[torch.Tensor, PIL.Image.Image] +num_inference_steps: int = 250 +eta: float = 0.0 +jump_length: int = 10 +jump_n_sample: int = 10 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +The original image to inpaint on. + + +mask_image (torch.FloatTensor or PIL.Image.Image) — +The mask_image where 0.0 values define which part of the original image to inpaint (change). + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float) — +The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 - 0.0 is DDIM +and 1.0 is DDPM scheduler respectively. + + +jump_length (int, optional, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +jump_n_sample (int, optional, defaults to 10) — +The number of times we will make forward time jump for a given chosen time sample. Take a look at +Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
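Note the mask convention: for RePaint, 0.0/black marks the region to regenerate, which is the opposite of inpainting pipelines where white pixels are repainted. A minimal sketch of building such a mask by hand with PIL; the image size and rectangle coordinates are arbitrary:

import PIL.Image
import PIL.ImageDraw

# Start from an all-white mask: white (255) pixels are kept from the original image.
mask_image = PIL.Image.new("L", (256, 256), color=255)
draw = PIL.ImageDraw.Draw(mask_image)

# Paint the area RePaint should regenerate black (0).
draw.rectangle((64, 64, 192, 192), fill=0)

# mask_image can then be passed to the pipeline together with the original image,
# as in the usage example above.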
diff --git a/scrapped_outputs/8e90d7e00680d609e6e18151859e29da.txt b/scrapped_outputs/8e90d7e00680d609e6e18151859e29da.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/8e90d7e00680d609e6e18151859e29da.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DCondtionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_names="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. 
Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/8ecc9613f913ef0f0637b36b6d5b40da.txt b/scrapped_outputs/8ecc9613f913ef0f0637b36b6d5b40da.txt new file mode 100644 index 0000000000000000000000000000000000000000..53d6bac007007dc5928724a55ca5a0bcb2652378 --- /dev/null +++ b/scrapped_outputs/8ecc9613f913ef0f0637b36b6d5b40da.txt @@ -0,0 +1,90 @@ +Textual Inversion + +Textual Inversion is a technique for capturing novel concepts from a small number of example images in a way that can later be used to control text-to-image pipelines. It does so by learning new ‘words’ in the embedding space of the pipeline’s text encoder. These special words can then be used within text prompts to achieve very fine-grained control of the resulting images. + +By using just 3-5 images you can teach new concepts to a model such as Stable Diffusion for personalized image generation (image source). +This technique was introduced in An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion. The paper demonstrated the concept using a latent diffusion model but the idea has since been applied to other variants such as Stable Diffusion. + +How It Works + + +Architecture Overview from the textual inversion blog post +Before a text prompt can be used in a diffusion model, it must first be processed into a numerical representation. This typically involves tokenizing the text, converting each token to an embedding and then feeding those embeddings through a model (typically a transformer) whose output will be used as the conditioning for the diffusion model. +Textual inversion learns a new token embedding (v* in the diagram above). A prompt (that includes a token which will be mapped to this new embedding) is used in conjunction with a noised version of one or more training images as inputs to the generator model, which attempts to predict the denoised version of the image. The embedding is optimized based on how well the model does at this task - an embedding that better captures the object or style shown by the training images will give more useful information to the diffusion model and thus result in a lower denoising loss. 
After many steps (typically several thousand) with a variety of prompt and image variants, the learned embedding should hopefully capture the essence of the new concept being taught. + +Usage + +To train your own textual inversions, see the example script here. +There is also a notebook for training, and one for inference. + +In addition to using concepts you have trained yourself, there is a community-created collection of trained textual inversions in the new Stable Diffusion public concepts library which you can also use from the inference notebook above. Over time this will hopefully grow into a useful resource as more examples are added. + +Example: Running locally + +The textual_inversion.py script here shows how to implement the training procedure and adapt it for stable diffusion. + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies. + + + Copied +pip install diffusers[training] accelerate transformers +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config + +Cat toy example + +You need to accept the model license before downloading or using the weights. In this example we’ll use model version v1-4, so you’ll need to visit its card, read the license and tick the checkbox if you agree. +You have to be a registered user in 🤗 Hugging Face Hub, and you’ll also need to use an access token for the code to work. For more information on access tokens, please refer to this section of the documentation. +Run the following command to authenticate your token + + + Copied +huggingface-cli login +If you have already cloned the repo, then you won’t need to go through these steps. + +Now let’s get our dataset. Download 3-4 images from here and save them in a directory. This will be our training data. +And launch the training using + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="path-to-dir-containing-images" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="<cat-toy>" --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" +A full training run takes ~1 hour on one V100 GPU. + +Inference + +Once you have trained a model using the above command, the inference can be done simply using the StableDiffusionPipeline. Make sure to include the placeholder_token in your prompt. + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +model_id = "path-to-your-trained-model" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A <cat-toy> backpack" + +image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] + +image.save("cat-backpack.png") diff --git a/scrapped_outputs/8ef4712e801b57643df9f06592d1ff89.txt b/scrapped_outputs/8ef4712e801b57643df9f06592d1ff89.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/8ef4712e801b57643df9f06592d1ff89.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models.
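You typically do not instantiate this scheduler yourself; it is used under the hood when latents are decoded with the consistency decoder. As a rough, hedged sketch of how that decoder is exercised in practice (the ConsistencyDecoderVAE component and the openai/consistency-decoder checkpoint are the assumptions here), it can be swapped into a Stable Diffusion pipeline as follows:

 Copied
import torch
from diffusers import DiffusionPipeline, ConsistencyDecoderVAE

# Replace the default VAE decoder with the DALL-E 3 consistency decoder
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("horse", generator=torch.manual_seed(0)).images[0]
image.save("horse.png")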
ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/8efa22197df02020d611c562e0a01da1.txt b/scrapped_outputs/8efa22197df02020d611c562e0a01da1.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac1b7ed0d2eb9d761ee55f0863101b101fd84b33 --- /dev/null +++ b/scrapped_outputs/8efa22197df02020d611c562e0a01da1.txt @@ -0,0 +1,50 @@ +Re-using seeds for fast prompt engineering + +A common use case when generating images is to generate a batch of images, select one image and improve it with a better, more detailed prompt in a second run. +To do this, one needs to make each generated image of the batch deterministic. +Images are generated by denoising gaussian random noise which can be instantiated by passing a torch generator. +Now, for batched generation, we need to make sure that every single generated image in the batch is tied exactly to one seed. In 🧨 Diffusers, this can be achieved by not passing one generator, but a list +of generators to the pipeline. +Let’s go through an example using runwayml/stable-diffusion-v1-5. +We want to generate several versions of the prompt: + + + Copied +prompt = "Labrador in the style of Vermeer" +Let’s load the pipeline + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +Now, let’s define 4 different generators, since we would like to reproduce a certain image. We’ll use seeds 0 to 3 to create our generators. 
+ + + Copied +>>> import torch + +>>> generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] +Let’s generate 4 images: + + + Copied +>>> images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +>>> images + +Ok, the last image has some double eyes, but the first image looks good! +Let’s try to make the prompt a bit better while keeping the first seed +so that the images are similar to the first image. + + + Copied +prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] +We create 4 generators with seed 0, which is the first seed we used before. +Let’s run the pipeline again. + + + Copied +>>> images = pipe(prompt, generator=generator).images +>>> images diff --git a/scrapped_outputs/8f3dabca476ac8ad60eb44e1fabdd85e.txt b/scrapped_outputs/8f3dabca476ac8ad60eb44e1fabdd85e.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/8f3dabca476ac8ad60eb44e1fabdd85e.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images.
Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/8fcf89ac29773223029f74904ac334ae.txt b/scrapped_outputs/8fcf89ac29773223029f74904ac334ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/8fcf89ac29773223029f74904ac334ae.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... 
seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... 
) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! 
If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... 
ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/901f87f20458aadfbd91348fae141f99.txt b/scrapped_outputs/901f87f20458aadfbd91348fae141f99.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/901f87f20458aadfbd91348fae141f99.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. 
This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. 
The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. 
Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/907874730d13d2bf56ccd99d9ff36bfe.txt b/scrapped_outputs/907874730d13d2bf56ccd99d9ff36bfe.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/907874730d13d2bf56ccd99d9ff36bfe.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. 
Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/909e6ccf64d4ea6a7d9e475550dfd6e4.txt b/scrapped_outputs/909e6ccf64d4ea6a7d9e475550dfd6e4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/90acc73e0e6c9c0748fe5c6e6c2b6046.txt b/scrapped_outputs/90acc73e0e6c9c0748fe5c6e6c2b6046.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6426f311f1dcd39145426a6e04473141bc5c4c0 --- /dev/null +++ b/scrapped_outputs/90acc73e0e6c9c0748fe5c6e6c2b6046.txt @@ -0,0 +1,157 @@ +Stable diffusion 2 + +Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of Stable Diffusion 1. +The project to train Stable Diffusion 2 was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. +The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. +For more details about how Stable Diffusion 2 works and how it differs from Stable Diffusion 1, please refer to the official launch announcement post. + +Tips + + +Available checkpoints: + +Note that the architecture is more or less identical to Stable Diffusion 1 so please refer to this page for API documentation. +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline +Super-Resolution (x4 resolution resolution): stable-diffusion-x4-upscaler StableDiffusionUpscalePipeline +Depth-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-depth with StableDiffusionDepth2ImagePipeline +We recommend using the DPMSolverMultistepScheduler as it’s currently the fastest scheduler there is. 
+ +Text-to-Image + +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image.save("astronaut.png") +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0] +image.save("astronaut.png") + +Image Inpainting + +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline + + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] + +image.save("yellow_cat.png") + +Super-Resolution + +Image Upscaling (x4 resolution resolution): stable-diffusion-x4-upscaler with StableDiffusionUpscalePipeline + + + Copied +import requests +from PIL import Image +from io import BytesIO +from diffusers import StableDiffusionUpscalePipeline +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +response = requests.get(url) +low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +upscaled_image.save("upsampled_cat.png") + +Depth-to-Image + +Depth-Guided Text-to-Image: stabilityai/stable-diffusion-2-depth 
StableDiffusionDepth2ImgPipeline + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +n_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] + +How to load and use different schedulers. + +The Stable Diffusion pipeline uses the DDIMScheduler by default. But diffusers provides many other schedulers that can be used with the stable diffusion pipeline, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=euler_scheduler) diff --git a/scrapped_outputs/90c696e70c0139581979bee8524b6b86.txt b/scrapped_outputs/90c696e70c0139581979bee8524b6b86.txt new file mode 100644 index 0000000000000000000000000000000000000000..191230d895650a96c9b8f907a3911fdd00d72140 --- /dev/null +++ b/scrapped_outputs/90c696e70c0139581979bee8524b6b86.txt @@ -0,0 +1,55 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL.
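As a quick, hedged illustration of the scheduler in use (the google/ddpm-cat-256 checkpoint is assumed here), DDPMPipeline pairs a UNet2DModel with a DDPMScheduler for unconditional sampling:

 Copied
from diffusers import DDPMPipeline

# The checkpoint bundles a UNet2DModel together with a DDPMScheduler config
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda")

# Ancestral DDPM sampling steps through all 1000 training timesteps by default
image = pipe(num_inference_steps=1000).images[0]
image.save("ddpm_generated_cat.png")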
DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. 
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/90f61c38488cea279a2d5ba77319d5ee.txt b/scrapped_outputs/90f61c38488cea279a2d5ba77319d5ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/9121bba1feca1a1f7813483271d1f8d6.txt b/scrapped_outputs/9121bba1feca1a1f7813483271d1f8d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c2ceeecc625c651b3ef1cc43f5c7fb053d83bae --- /dev/null +++ b/scrapped_outputs/9121bba1feca1a1f7813483271d1f8d6.txt @@ -0,0 +1,100 @@ +Stochastic Karras VE + + +Overview + +Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. +The abstract of the paper is the following: +We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. 
This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55. +This pipeline implements the Stochastic sampling tailored to the Variance-Expanding (VE) models. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_stochastic_karras_ve.py +Unconditional Image Generation +- + +KarrasVePipeline + + +class diffusers.KarrasVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: KarrasVeScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (KarrasVeScheduler) — +Scheduler for the diffusion process to be used in combination with unet to denoise the encoded image. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/913cd8fdb650f55ae6eccc8a736068ac.txt b/scrapped_outputs/913cd8fdb650f55ae6eccc8a736068ac.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0c782a0af4ea267d1420a2637f1b69aa8a55acf --- /dev/null +++ b/scrapped_outputs/913cd8fdb650f55ae6eccc8a736068ac.txt @@ -0,0 +1,1075 @@ +VersatileDiffusion + +VersatileDiffusion was proposed in Versatile Diffusion: Text, Images and Variations All in One Diffusion Model by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi . +The abstract of the paper is the following: +The recent advances in diffusion models have set an impressive milestone in many generation tasks. 
Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD initiates novel extensions and applications such as disentanglement of style and semantic, image-text dual-guided generation, etc.; c) Through these experiments and applications, VD provides more semantic insights of the generated outputs.
+
+Tips
+
+VersatileDiffusion is conceptually very similar to Stable Diffusion, but instead of providing just an image data stream conditioned on text, VersatileDiffusion provides both an image and a text data stream and can be conditioned on both text and image.
+
+*Run VersatileDiffusion*
+
+You can either load the memory-intensive “all-in-one” VersatileDiffusionPipeline that can run all tasks
+with the same class, as shown in VersatileDiffusionPipeline.text_to_image(), VersatileDiffusionPipeline.image_variation(), and VersatileDiffusionPipeline.dual_guided(),
+or
+you can run the individual pipelines, which are much more memory efficient:
+Text-to-Image: VersatileDiffusionTextToImagePipeline.__call__()
+Image Variation: VersatileDiffusionImageVariationPipeline.__call__()
+Dual Text and Image Guided Generation: VersatileDiffusionDualGuidedPipeline.__call__()
+
+*How to load and use different schedulers.*
+
+The Versatile Diffusion pipelines use the DDIMScheduler by default, but diffusers provides many other schedulers that can be used with the Versatile Diffusion pipelines, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc.
+To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline.
For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("shi-labs/versatile-diffusion", subfolder="scheduler") +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", scheduler=euler_scheduler) + +VersatileDiffusionPipeline + + +class diffusers.VersatileDiffusionPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPFeatureExtractor +text_encoder: CLIPTextModel +image_encoder: CLIPVisionModel +image_unet: UNet2DConditionModel +text_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionMegaSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +dual_guided + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe.dual_guided( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... 
).images[0] +>>> image.save("./car_variation.png") + +image_variation + +< +source +> +( +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. 
+ + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.image_variation(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +text_to_image + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +VersatileDiffusionTextToImagePipeline + + +class diffusers.VersatileDiffusionTextToImagePipeline + +< +source +> +( +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionTextToImagePipeline +>>> import torch + +>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +VersatileDiffusionImageVariationPipeline + + +class diffusers.VersatileDiffusionImageVariationPipeline + +< +source +> +( +image_feature_extractor: CLIPFeatureExtractor +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionImageVariationPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +VersatileDiffusionDualGuidedPipeline + + +class diffusers.VersatileDiffusionDualGuidedPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPFeatureExtractor +text_encoder: CLIPTextModelWithProjection +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionDualGuidedPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
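Below is a rough sketch of how sequential CPU offloading can be combined with the dual-guided pipeline from the examples above. It assumes accelerate is installed and a CUDA device is available; note that the pipeline is not moved to "cuda" manually, since each submodule is shuttled to the GPU only for its own forward pass.

Copied
>>> import torch
>>> from diffusers import VersatileDiffusionDualGuidedPipeline
>>> from diffusers.utils import load_image

>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained(
...     "shi-labs/versatile-diffusion", torch_dtype=torch.float16
... )
>>> pipe.remove_unused_weights()
>>> # Keep submodules on the CPU and move each one to the GPU only when it runs.
>>> pipe.enable_sequential_cpu_offload(gpu_id=0)

>>> image = load_image("https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg")
>>> result = pipe(prompt="a red car in the sun", image=image, text_to_image_strength=0.75).images[0]

Expect generation to be noticeably slower with sequential offloading enabled; it is a trade of speed for peak GPU memory.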
diff --git a/scrapped_outputs/914e0a2b2b5b14a54ff69143b2935b68.txt b/scrapped_outputs/914e0a2b2b5b14a54ff69143b2935b68.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/914e0a2b2b5b14a54ff69143b2935b68.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. 
Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. 
Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/91864405475ec979427bbab0501b0624.txt b/scrapped_outputs/91864405475ec979427bbab0501b0624.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/9186df32d6b849db90018ae627aa3e06.txt b/scrapped_outputs/9186df32d6b849db90018ae627aa3e06.txt new file mode 100644 index 0000000000000000000000000000000000000000..27ff3e96e4e6d4dd3d19eb137ba8d07b5db24119 --- /dev/null +++ b/scrapped_outputs/9186df32d6b849db90018ae627aa3e06.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. 
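For example, here is a small sketch that combines two of the helpers documented below, load_image and make_image_grid (the image URL is only an illustration borrowed from the pipeline examples elsewhere in these docs):

Copied
from diffusers.utils import load_image, make_image_grid

# Download (or open) an image; by default it is converted to "RGB".
image = load_image("https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg")

# Arrange four copies of the image in a 2x2 grid for quick visual inspection.
grid = make_image_grid([image] * 4, rows=2, cols=2)
grid.save("grid.png")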
numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None fps: int = 10 ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/91a7da84214a5f9692a329d58df6d74b.txt b/scrapped_outputs/91a7da84214a5f9692a329d58df6d74b.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/91a7da84214a5f9692a329d58df6d74b.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/91bb5ced42b0e1522b1f8269eb8046bc.txt b/scrapped_outputs/91bb5ced42b0e1522b1f8269eb8046bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6d34afd1a2489084d1e5089bae9e956ec46cdb --- /dev/null +++ b/scrapped_outputs/91bb5ced42b0e1522b1f8269eb8046bc.txt @@ -0,0 +1,281 @@ +Token Merging + +Token Merging (introduced in Token Merging: Your ViT But Faster) works by merging the redundant tokens / patches progressively in the forward pass of a Transformer-based network. It can speed up the inference latency of the underlying network. +After Token Merging (ToMe) was released, the authors released Token Merging for Fast Stable Diffusion, which introduced a version of ToMe which is more compatible with Stable Diffusion. We can use ToMe to gracefully speed up the inference latency of a DiffusionPipeline. This doc discusses how to apply ToMe to the StableDiffusionPipeline, the expected speedups, and the qualitative aspects of using ToMe on the StableDiffusionPipeline. + +Using ToMe + +The authors of ToMe released a convenient Python library called tomesd that lets us apply ToMe to a DiffusionPipeline like so: + + + Copied +from diffusers import StableDiffusionPipeline +import tomesd + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + +image = pipeline("a photo of an astronaut riding a horse on mars").images[0] +And that’s it! +tomesd.apply_patch() exposes a number of arguments to let us strike a balance between the pipeline inference speed and the quality of the generated tokens. Amongst those arguments, the most important one is ratio. ratio controls the number of tokens that will be merged during the forward pass. For more details on tomesd, please refer to the original repository https://github.com/dbolya/tomesd and the paper. 
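As an illustrative sketch, a higher ratio merges more tokens for extra speed at some cost in image quality, and the patch can be removed once it is no longer needed. Here ratio is the main documented argument; tomesd.remove_patch() is provided by recent releases of the library, so double-check the repository if your version differs.

Copied
import torch
from diffusers import StableDiffusionPipeline
import tomesd

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A more aggressive merge ratio trades some image quality for extra speed.
tomesd.apply_patch(pipeline, ratio=0.75)
image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

# Restore the original, unpatched pipeline.
tomesd.remove_patch(pipeline)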
Benchmarking tomesd with StableDiffusionPipeline

We benchmarked the impact of using tomesd on StableDiffusionPipeline along with xformers across different image resolutions. We used A100 and V100 as our test GPU devices with the following development environment (with Python 3.8.5):

 Copied
- `diffusers` version: 0.15.1
- Python version: 3.8.16
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Huggingface_hub version: 0.13.2
- Transformers version: 4.27.2
- Accelerate version: 0.18.0
- xFormers version: 0.0.16
- tomesd version: 0.1.2

We used this script for benchmarking: https://gist.github.com/sayakpaul/27aec6bca7eb7b0e0aa4112205850335. Following are our findings:

A100

Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers | ToMe speedup (%) | ToMe + xFormers speedup (%)
512 | 10 | 6.88 | 5.26 | 4.69 | 23.54651163 | 31.83139535
768 | 10 | OOM | 14.71 | 11 | |
768 | 8 | OOM | 11.56 | 8.84 | |
768 | 4 | OOM | 5.98 | 4.66 | |
768 | 2 | 4.99 | 3.24 | 3.1 | 35.07014028 | 37.8757515
768 | 1 | 3.29 | 2.24 | 2.03 | 31.91489362 | 38.29787234
1024 | 10 | OOM | OOM | OOM | |
1024 | 8 | OOM | OOM | OOM | |
1024 | 4 | OOM | 12.51 | 9.09 | |
1024 | 2 | OOM | 6.52 | 4.96 | |
1024 | 1 | 6.4 | 3.61 | 2.81 | 43.59375 | 56.09375

The timings reported here are in seconds. Speedups are calculated over the Vanilla timings.

V100

Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers | ToMe speedup (%) | ToMe + xFormers speedup (%)
512 | 10 | OOM | 10.03 | 9.29 | |
512 | 8 | OOM | 8.05 | 7.47 | |
512 | 4 | 5.7 | 4.3 | 3.98 | 24.56140351 | 30.1754386
512 | 2 | 3.14 | 2.43 | 2.27 | 22.61146497 | 27.70700637
512 | 1 | 1.88 | 1.57 | 1.57 | 16.4893617 | 16.4893617
768 | 10 | OOM | OOM | 23.67 | |
768 | 8 | OOM | OOM | 18.81 | |
768 | 4 | OOM | 11.81 | 9.7 | |
768 | 2 | OOM | 6.27 | 5.2 | |
768 | 1 | 5.43 | 3.38 | 2.82 | 37.75322284 | 48.06629834
1024 | 10 | OOM | OOM | OOM | |
1024 | 8 | OOM | OOM | OOM | |
1024 | 4 | OOM | OOM | 19.35 | |
1024 | 2 | OOM | 13 | 10.78 | |
1024 | 1 | OOM | 6.66 | 5.54 | |

As seen in the tables above, the speedup with tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it becomes possible to run the pipeline on a higher resolution, like 1024x1024.
It might be possible to speed up inference even further with torch.compile().

Quality

As reported in the paper, ToMe can preserve the quality of the generated images to a great extent while speeding up inference. By increasing the ratio, it is possible to further speed up inference, but that might come at the cost of a deterioration in the image quality.
To test the quality of the generated samples using our setup, we sampled a few prompts from the “Parti Prompts” (introduced in Parti) and performed inference with the StableDiffusionPipeline in the following settings:
- Vanilla StableDiffusionPipeline
- StableDiffusionPipeline + ToMe
- StableDiffusionPipeline + ToMe + xformers
We didn’t notice any significant decrease in the quality of the generated samples. Here are samples:

You can check out the generated samples here. We used this script for conducting this experiment. diff --git a/scrapped_outputs/91cf2c772df3cd88275237da10c0573b.txt b/scrapped_outputs/91cf2c772df3cd88275237da10c0573b.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/91cf2c772df3cd88275237da10c0573b.txt @@ -0,0 +1,124 @@ Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other.
This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. 
Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. 
Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/91eb0c8931bd3971a2851bd29ff494e5.txt b/scrapped_outputs/91eb0c8931bd3971a2851bd29ff494e5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/91f7dd4a2e36cbab210f74bf41ab443b.txt b/scrapped_outputs/91f7dd4a2e36cbab210f74bf41ab443b.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/91f7dd4a2e36cbab210f74bf41ab443b.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you 
download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. 
However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! 
Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/92029278c78d77cd91f6d08053a570fe.txt b/scrapped_outputs/92029278c78d77cd91f6d08053a570fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..4739626007e95d8a06267eec2e461cffdebeba59 --- /dev/null +++ b/scrapped_outputs/92029278c78d77cd91f6d08053a570fe.txt @@ -0,0 +1,290 @@ +Cycle Diffusion + + +Overview + +Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. +The abstract of the paper is the following: +Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. +Tips: +The Cycle Diffusion pipeline is fully compatible with any Stable Diffusion checkpoints +Currently Cycle Diffusion only works with the DDIMScheduler. 
+Example: +In the following we should how to best use the CycleDiffusionPipeline + + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import CycleDiffusionPipeline, DDIMScheduler + +# load the pipeline +# make sure you're logged in with `huggingface-cli login` +model_id_or_path = "CompVis/stable-diffusion-v1-4" +scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler") +pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda") + +# let's download an initial image +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("horse.png") + +# let's specify a prompt +source_prompt = "An astronaut riding a horse" +prompt = "An astronaut riding an elephant" + +# call the pipeline +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.8, + guidance_scale=2, + source_guidance_scale=1, +).images[0] + +image.save("horse_to_elephant.png") + +# let's try another example +# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("black.png") + +source_prompt = "A black colored car" +prompt = "A blue colored car" + +# call the pipeline +torch.manual_seed(0) +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.85, + guidance_scale=3, + source_guidance_scale=1, +).images[0] + +image.save("black_to_blue.png") + +CycleDiffusionPipeline + + +class diffusers.CycleDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: DDIMScheduler +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +source_prompt: typing.Union[str, typing.List[str]] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +source_guidance_scale: typing.Optional[float] = 1 +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +source_guidance_scale (float, optional, defaults to 1) — +Guidance scale for the source prompt. This is useful to control the amount of influence the source +prompt for encoding. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.1) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
+ + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/92064c6ebe8f311fc23bbd56b7b27b4d.txt b/scrapped_outputs/92064c6ebe8f311fc23bbd56b7b27b4d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/9210d1b58952825929cb8324aa05e05a.txt b/scrapped_outputs/9210d1b58952825929cb8324aa05e05a.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/9210d1b58952825929cb8324aa05e05a.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. 
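To make the noising step above concrete, here is a rough sketch of how a scheduler adds a controlled amount of noise to the encoded latents; the strength value anticipates the parameter discussed later in this guide, and the random latents stand in for a VAE-encoded image (illustrative only, not the pipeline's exact implementation): Copied
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
scheduler.set_timesteps(50)

strength = 0.8
init_timestep = min(int(50 * strength), 50)   # 40 noising steps
t_start = max(50 - init_timestep, 0)          # denoising starts at step index 10
timesteps = scheduler.timesteps[t_start:]

latents = torch.randn(1, 4, 64, 64)           # stand-in for VAE-encoded image latents
noise = torch.randn_like(latents)
noisy_latents = scheduler.add_noise(latents, noise, timesteps[:1])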
With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
diff --git a/scrapped_outputs/92191a770fdffe0507816856e0700632.txt b/scrapped_outputs/92191a770fdffe0507816856e0700632.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/92191a770fdffe0507816856e0700632.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. --pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. 
Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. 
Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MB). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl_wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/922602ca4a9d9097fa17f6ff98101767.txt b/scrapped_outputs/922602ca4a9d9097fa17f6ff98101767.txt new file mode 100644 index 0000000000000000000000000000000000000000..163deebba32d44239adf15467f9dcbdfbfad7c90 --- /dev/null +++ b/scrapped_outputs/922602ca4a9d9097fa17f6ff98101767.txt @@ -0,0 +1,635 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
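The “zero convolutions” mentioned in the abstract are simply convolution layers whose weights and biases start at zero, so the ControlNet branch initially contributes nothing and cannot perturb the frozen backbone. The snippet below is only an illustrative sketch of that idea, not the actual ControlNetModel implementation:
Copied
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    # 1x1 convolution initialized to zero, in the spirit of the ControlNet paper
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

control_features = torch.randn(1, 320, 64, 64)   # hypothetical features from the trainable copy
backbone_residual = torch.randn(1, 320, 64, 64)  # hypothetical features from the frozen UNet

# before any training step, the fused output equals the backbone output exactly
fused = backbone_residual + zero_conv(320)(control_features)
assert torch.allclose(fused, backbone_residual)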
You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
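The scheduler slot accepts any of the KarrasDiffusionSchedulers, and the remaining components are assembled automatically by from_pretrained(). A brief, hedged sketch of loading the pipeline and swapping the scheduler afterwards (EulerDiscreteScheduler is just one possible choice):
Copied
import torch
from diffusers import ControlNetModel, EulerDiscreteScheduler, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)

# any scheduler from the KarrasDiffusionSchedulers family can be dropped in via from_config
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()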
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
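As a usage note for the FreeU and VAE helpers documented above, the following hedged sketch continues from the example’s pipe; the FreeU values are ones commonly suggested for SDXL and may need tuning for your checkpoint:
Copied
# reuses the `pipe` object from the example above
pipe.enable_vae_slicing()   # decode the output batch slice by slice to save memory
pipe.enable_vae_tiling()    # tile the VAE for very large images

# FreeU re-weights backbone (b1, b2) and skip (s1, s2) features during denoising
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)

# ... run pipe(...) as usual, then turn the mechanisms off again if they are not wanted
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()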
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
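Since encode_prompt() returns the (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds) tuple expected by __call__, you can precompute the embeddings once and reuse them across several generations. A hedged sketch, reusing pipe and canny_image from the example above:
Copied
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    negative_prompt="low quality, bad quality, sketches",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# pass the precomputed embeddings instead of the raw strings on every subsequent call
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,  # the ControlNet conditioning image prepared in the example above
).images[0]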
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. 
Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... 
image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
Pipeline for inpainting using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains the entire masked area, and then expand that region based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting.
This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument.
pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. Function invoked when calling the pipeline for generation. Examples: Copied
>>> # !pip install transformers accelerate opencv-python
>>> import cv2
>>> import numpy as np
>>> import torch
>>> from PIL import Image

>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler
>>> from diffusers.utils import load_image

>>> init_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png"
... )
>>> init_image = init_image.resize((1024, 1024))

>>> generator = torch.Generator(device="cpu").manual_seed(1)

>>> mask_image = load_image(
...     "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png"
... )
>>> mask_image = mask_image.resize((1024, 1024))


>>> def make_canny_condition(image):
...     # Convert the PIL image into a 3-channel Canny edge map used as the ControlNet condition.
...     image = np.array(image)
...     image = cv2.Canny(image, 100, 200)
...     image = image[:, :, None]
...     image = np.concatenate([image, image, image], axis=2)
...     image = Image.fromarray(image)
...     return image


>>> control_image = make_canny_condition(init_image)

>>> controlnet = ControlNetModel.from_pretrained(
...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
... )
>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
... )

>>> pipe.enable_model_cpu_offload()

>>> # generate image
>>> image = pipe(
...     "a handsome man with ray-ban sunglasses",
...     num_inference_steps=20,
...     generator=generator,
...     eta=1.0,
...     image=init_image,
...     mask_image=mask_image,
...     control_image=control_image,
... ).images[0]
disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — Scaling factor for stage 2 to attenuate the contributions of the skip features.
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/9239bbf82c05af8f732edf3b339f375d.txt b/scrapped_outputs/9239bbf82c05af8f732edf3b339f375d.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8eb8600572d533c95c8e93de2ce9be735ab2e02 --- /dev/null +++ b/scrapped_outputs/9239bbf82c05af8f732edf3b339f375d.txt @@ -0,0 +1,465 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. 
Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. 
TencentARC/t2iadapter_canny_sd14v1 Trained with Canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image An OpenPose bone image. TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image An mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1 Trained with semantic segmentation A custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied
from diffusers.utils import load_image, make_image_grid

cond_keypose = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"
)
cond_depth = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"
)
cond = [cond_keypose, cond_depth]

prompt = ["A man walking in an office room with a nice view"]
The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. Copied
import torch
from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter

adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"),
        T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"),
    ]
)
adapters = adapters.to(torch.float16)

pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    adapter=adapters,
).to("cuda")

image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0]
make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3)
T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — Provides additional conditioning to the unet during the denoising process. If you set multiple Adapters as a list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — List of floats representing the weight which will be multiplied with each adapter’s output before adding them together.
vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead of a plain tuple. callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under self.processor in diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the residual in the original unet. If multiple adapters are specified in init, you can set the corresponding scale as a list. clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings. Returns ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a tuple.
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... ) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. 
Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. 
As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. 
If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. 
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
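As a practical illustration of the memory toggles documented above, the sketch below enables sliced and tiled VAE decoding on the SDXL adapter pipeline from the earlier usage example and switches them off again afterwards. It is a minimal sketch rather than a recommended configuration: whether slicing or tiling is needed at all depends on your batch size, output resolution, and available GPU memory. Copied
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

# Same sketch adapter and SDXL base checkpoint as in the usage example above.
adapter = T2IAdapter.from_pretrained(
    "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Decode the VAE in slices (helps with larger batches) and in tiles (helps with larger images).
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run pipe(...) as in the usage example above ...

# Revert to single-pass decoding once memory is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()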
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
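Because encode_prompt() is exposed as a public method, the text encoders only need to run once when the same prompt is reused across many generations; the returned tensors can then be passed back into the pipeline call through prompt_embeds and the related arguments. The sketch below is illustrative only: it reuses the pipe and sketch_image objects from the examples above and assumes the usual SDXL four-tensor return order of (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds). Copied
import torch

# Encode the prompt once ...
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a dog in real world, high quality",
    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# ... then reuse the embeddings across several seeds without re-running the text encoders.
images = [
    pipe(
        prompt_embeds=prompt_embeds,
        negative_prompt_embeds=negative_prompt_embeds,
        pooled_prompt_embeds=pooled_prompt_embeds,
        negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
        image=sketch_image,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    for seed in (0, 1, 2)
]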
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/923deae8beddcb25fc71fc71d48a62da.txt b/scrapped_outputs/923deae8beddcb25fc71fc71d48a62da.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/923deae8beddcb25fc71fc71d48a62da.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions. Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +generation. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. 
temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). 
If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. 
See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/9250ea33a9a8bc5f594fd8cb0d32d12f.txt b/scrapped_outputs/9250ea33a9a8bc5f594fd8cb0d32d12f.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/9250ea33a9a8bc5f594fd8cb0d32d12f.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... 
hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... 
) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! 
If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... 
ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/92de2fa6298aca0321ba947527ddefc9.txt b/scrapped_outputs/92de2fa6298aca0321ba947527ddefc9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e291381381237ebdd7e328cd5ca0da62a70822ee --- /dev/null +++ b/scrapped_outputs/92de2fa6298aca0321ba947527ddefc9.txt @@ -0,0 +1,200 @@ +Denoising diffusion implicit models (DDIM) + + +Overview + +Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The abstract of the paper is the following: +Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. +The original codebase of this paper can be found here: ermongroup/ddim. +For questions, feel free to contact the author on tsong.me. 
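As a hedged illustration of the speed/quality trade-off described above (the model checkpoint is chosen only for illustration), the scheduler can be swapped into an existing pipeline and run with far fewer steps than the 1000 used during training: Copied >>> import torch
>>> from diffusers import DiffusionPipeline, DDIMScheduler

>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
>>> # Reuse the pipeline's scheduler config so the beta schedule matches the trained model
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe = pipe.to("cuda")

>>> # DDIM produces good samples in far fewer steps than the full training-length chain
>>> image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=50).images[0]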
+ +DDIMScheduler + + +class diffusers.DDIMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +clip_sample: bool = True +set_alpha_to_one: bool = True +steps_offset: int = 0 +prediction_type: str = 'epsilon' +**kwargs + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + +set_alpha_to_one (bool, default True) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising +diffusion probabilistic models (DDPMs) with non-Markovian guidance. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2010.02502 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. 
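A minimal sketch of how set_timesteps() fits together with the step() method documented below (a random tensor stands in for a real diffusion model's output, so the loop is only structural): Copied >>> import torch
>>> from diffusers import DDIMScheduler

>>> scheduler = DDIMScheduler(num_train_timesteps=1000)
>>> scheduler.set_timesteps(num_inference_steps=50)

>>> sample = torch.randn(1, 3, 64, 64)  # stand-in for an initial noisy sample
>>> for t in scheduler.timesteps:
...     model_output = torch.randn_like(sample)  # placeholder for model(sample, t)
...     sample = scheduler.step(model_output, t, sample).prev_sample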
+ +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +eta: float = 0.0 +use_clipped_model_output: bool = False +generator = None +variance_noise: typing.Optional[torch.FloatTensor] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +eta (float) — weight of noise for added noise in diffusion step. + + +use_clipped_model_output (bool) — if True, compute “corrected” model_output from the clipped +predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when +self.config.clip_sample is True. If no clipping has happened, “corrected” model_output would +coincide with the one provided as input and use_clipped_model_output will have no effect. +generator — random number generator. + + +variance_noise (torch.FloatTensor) — instead of generating noise for the variance using generator, we +can directly provide the noise for the variance itself. This is useful for methods such as +CycleDiffusion. (https://arxiv.org/abs/2210.05559) + + +return_dict (bool) — option for returning tuple rather than DDIMSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/930f184bf8ce23154981f658c95fb739.txt b/scrapped_outputs/930f184bf8ce23154981f658c95fb739.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/930f184bf8ce23154981f658c95fb739.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate There are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide. 
Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for lower memory requirements: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering. Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks together should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Video Diffusion also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the value, the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. 
For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/931f1df5e13ce800c9061a9ead93b5d0.txt b/scrapped_outputs/931f1df5e13ce800c9061a9ead93b5d0.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/932be236dd551398cffcb9e283dfb28d.txt b/scrapped_outputs/932be236dd551398cffcb9e283dfb28d.txt new file mode 100644 index 0000000000000000000000000000000000000000..b18ada944b930ad868f953fce2ec5a76ec51a52d --- /dev/null +++ b/scrapped_outputs/932be236dd551398cffcb9e283dfb28d.txt @@ -0,0 +1,558 @@ +AltDiffusion + +AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. +The abstract of the paper is the following: +In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_alt_diffusion.py +Text-to-Image Generation +- +- +pipeline_alt_diffusion_img2img.py +Image-to-Image Text-Guided Generation +- +- + +Tips + +AltDiffusion is conceptually exactly the same as Stable Diffusion. +Run AltDiffusion +AltDiffusion can be tested very easily with the AltDiffusionPipeline, AltDiffusionImg2ImgPipeline and the "BAAI/AltDiffusion-m9" checkpoint exactly in the same way it is shown in the Conditional Image Generation Guide and the Image-to-Image Generation Guide. +How to load and use different schedulers. +The Alt Diffusion pipeline uses the DDIMScheduler by default, but Diffusers provides many other schedulers that can be used with it, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. 
For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler") +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler) +How to convert all use cases with multiple or single pipeline +If you want to use all possible use cases in a single DiffusionPipeline we recommend using the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... AltDiffusionPipeline, +... AltDiffusionImg2ImgPipeline, +... ) + +>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components) + +>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline + +AltDiffusionPipelineOutput + + +class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Alt Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +AltDiffusionPipeline + + +class diffusers.AltDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Alt Diffusion. 
+This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
+ + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import AltDiffusionPipeline + +>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" +>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" +>>> image = pipe(prompt).images[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +AltDiffusionImg2ImgPipeline + + +class diffusers.AltDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. 
Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Alt Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. 
+ + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import AltDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "BAAI/AltDiffusion-m9" +>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> # "A fantasy landscape, trending on artstation" +>>> prompt = "幻想风景, artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("幻想风景.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. diff --git a/scrapped_outputs/9334364e110a41db954a743c340ebf40.txt b/scrapped_outputs/9334364e110a41db954a743c340ebf40.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/9351703dee6651ca5f06d0d5fd25a77f.txt b/scrapped_outputs/9351703dee6651ca5f06d0d5fd25a77f.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/9351703dee6651ca5f06d0d5fd25a77f.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how to load .safetensors files, and how to convert Stable Diffusion model weights stored in other formats to .safetensors. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. 
You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/93735f0c69d1c390621724b81a57cf85.txt b/scrapped_outputs/93735f0c69d1c390621724b81a57cf85.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/93735f0c69d1c390621724b81a57cf85.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. 
Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. 
Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. 
This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/938dbfc7a8552a29c6a71a421bf7d43f.txt b/scrapped_outputs/938dbfc7a8552a29c6a71a421bf7d43f.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/938dbfc7a8552a29c6a71a421bf7d43f.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! 
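If you would rather browse these organizations programmatically than through the Hub website, here is a minimal sketch using huggingface_hub; the organization name and result limit are just illustrative assumptions, and a recent version of huggingface_hub is assumed:

from huggingface_hub import list_models

# List a handful of checkpoints published by one of the organizations mentioned above.
for model in list_models(author="stabilityai", limit=5):
    print(model.id)
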
The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/93955b8db2543b018d490deaa711c8a6.txt b/scrapped_outputs/93955b8db2543b018d490deaa711c8a6.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/93955b8db2543b018d490deaa711c8a6.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. 
ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. 
+If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/93d1014b674d8eefe7c5e7b25bf91980.txt b/scrapped_outputs/93d1014b674d8eefe7c5e7b25bf91980.txt new file mode 100644 index 0000000000000000000000000000000000000000..bcb666def15e33f1f85b4b3d91e464c6e12c8f33 --- /dev/null +++ b/scrapped_outputs/93d1014b674d8eefe7c5e7b25bf91980.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
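Before the class reference below, here is a minimal loading sketch. The checkpoint name is an assumption: any repository that stores a 3D UNet in a unet subfolder (for example, a ModelScope-style text-to-video checkpoint) should work the same way.

import torch
from diffusers import UNet3DConditionModel

# Assumed checkpoint: a text-to-video repository that keeps its 3D UNet in the "unet" subfolder.
unet = UNet3DConditionModel.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", subfolder="unet", torch_dtype=torch.float16
)

# Inspect a few config values exposed by the loaded model.
print(unet.config.in_channels, unet.config.cross_attention_dim)

# Optional memory saver documented in the API below: chunk the feed-forward layers.
unet.enable_forward_chunking(chunk_size=1, dim=1)
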
UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. 
In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unload_lora < source > ( ) Unloads LoRA weights. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/93d324a4981bda29318d0244cdf6c1eb.txt b/scrapped_outputs/93d324a4981bda29318d0244cdf6c1eb.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/93d324a4981bda29318d0244cdf6c1eb.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory efficient attention 2.63s x3.61 Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speeds up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. 
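To make the flag concrete, here is a small sketch that enables TF32 before running a pipeline; the checkpoint and prompt are simply the ones this guide benchmarks with, and an Ampere-or-newer GPU is assumed:

import torch
from diffusers import DiffusionPipeline

# TF32 for matrix multiplications is off by default; convolutions already use TF32 by default.
torch.backends.cuda.matmul.allow_tf32 = True

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=50).images[0]
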
Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. diff --git a/scrapped_outputs/93da3776e4eadda663385ef9ac4e7439.txt b/scrapped_outputs/93da3776e4eadda663385ef9ac4e7439.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/94145bfe4fac2aaa584223ca03707276.txt b/scrapped_outputs/94145bfe4fac2aaa584223ca03707276.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/94145bfe4fac2aaa584223ca03707276.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. 
If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. 
pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/941c9f59aefacfe1bf4f5cd49a98b369.txt b/scrapped_outputs/941c9f59aefacfe1bf4f5cd49a98b369.txt new file mode 100644 index 0000000000000000000000000000000000000000..c720fbda739c9fd46bb3b6c3a1f79c411effed53 --- /dev/null +++ b/scrapped_outputs/941c9f59aefacfe1bf4f5cd49a98b369.txt @@ -0,0 +1,33 @@ +Variance Preserving Stochastic Differential Equation (VP-SDE) scheduler + + +Overview + +Original paper can be found here. +Score SDE-VP is under construction. + +ScoreSdeVpScheduler + + +class diffusers.schedulers.ScoreSdeVpScheduler + +< +source +> +( +num_train_timesteps = 2000 +beta_min = 0.1 +beta_max = 20 +sampling_eps = 0.001 + +) + + + +The variance preserving stochastic differential equation (SDE) scheduler. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +UNDER CONSTRUCTION diff --git a/scrapped_outputs/943e19b3a270546921786b77a6cf5740.txt b/scrapped_outputs/943e19b3a270546921786b77a6cf5740.txt new file mode 100644 index 0000000000000000000000000000000000000000..156e21d626ab15c942cd8ad3d18e0d7c614d887d --- /dev/null +++ b/scrapped_outputs/943e19b3a270546921786b77a6cf5740.txt @@ -0,0 +1,66 @@ +🧨 Diffusers Training Examples + +Diffusers training examples are a collection of scripts to demonstrate how to effectively use the diffusers library +for a variety of use cases. +Note: If you are looking for official examples on how to use diffusers for inference, +please have a look at src/diffusers/pipelines +Our examples aspire to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. +More specifically, this means: +Self-contained: An example script shall only depend on “pip-install-able” Python packages that can be found in a requirements.txt file. Example scripts shall not depend on any local files. This means that one can simply download an example script, e.g. train_unconditional.py, install the required dependencies, e.g. requirements.txt and execute the example script. +Easy-to-tweak: While we strive to present as many use cases as possible, the example scripts are just that - examples. It is expected that they won’t work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, most of the examples fully expose the preprocessing of the data and the training loop to allow you to tweak and edit them as required. +Beginner-friendly: We do not aim for providing state-of-the-art training scripts for the newest models, but rather examples that can be used as a way to better understand diffusion models and how to use them with the diffusers library. We often purposefully leave out certain state-of-the-art methods if we consider them too complex for beginners. +One-purpose-only: Examples should show one task and one task only. 
Even if two tasks are very similar from a modeling point of view (e.g. image super-resolution and image modification tend to use the same model and training method), we want each example to showcase only one task to keep it as readable and easy to understand as possible. +We provide official examples that cover the most popular tasks of diffusion models. +Official examples are actively maintained by the diffusers maintainers and we try to rigorously follow our example philosophy as defined above. +If you feel like another important example should exist, we are more than happy to welcome a Feature Request or directly a Pull Request from you! +Training examples show how to pretrain or fine-tune diffusion models for a variety of tasks. Currently we support: +Unconditional Training +Text-to-Image Training +Textual Inversion +Dreambooth +LoRA Support +If possible, please install xFormers for memory efficient attention. This could help make your training faster and less memory intensive. +Task | 🤗 Accelerate | 🤗 Datasets | Colab +Unconditional Image Generation | ✅ | ✅ | +Text-to-Image fine-tuning | ✅ | ✅ | +Textual Inversion | ✅ | - | +Dreambooth | ✅ | - | + +Community + +In addition, we provide community examples, which are examples added and maintained by our community. +Community examples can consist of both training examples and inference pipelines. +For such examples, we are more lenient regarding the philosophy defined above and also cannot guarantee to provide maintenance for every issue. +Examples that are useful for the community, but are either not yet deemed popular or not yet following our above philosophy should go into the community examples folder. The community folder therefore includes training examples and inference pipelines. +Note: Community examples can be a great first contribution to show to the community how you like to use diffusers 🪄. + +Important note + +To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . +Then cd into the example folder of your choice and run + + + Copied +pip install -r requirements.txt diff --git a/scrapped_outputs/943e8c3a8f022f0e284d765171310dd4.txt b/scrapped_outputs/943e8c3a8f022f0e284d765171310dd4.txt new file mode 100644 index 0000000000000000000000000000000000000000..642f75cfa7384eee4d148149356f6a94df142d05 --- /dev/null +++ b/scrapped_outputs/943e8c3a8f022f0e284d765171310dd4.txt @@ -0,0 +1,390 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image.
Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
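Before diving into the full argument reference below, here is a minimal sketch (not part of the official example set on this page) of the component-reuse tip mentioned above: an image-to-image pipeline can be assembled from an already-loaded text-to-image pipeline via the generic DiffusionPipeline components mapping, so the weights are only loaded once.

Copied
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Load the text-to-image pipeline once.
text2img = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse its components (VAE, text encoder, tokenizer, UNet, scheduler, ...) to build an
# image-to-image pipeline without downloading or loading the weights a second time.
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)

The same pattern applies to other pipelines that share the Stable Diffusion components.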
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. 
text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. 
If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. 
Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
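Since encode_prompt (documented above) produces exactly the embeddings that the pipeline's prompt_embeds and negative_prompt_embeds arguments accept, a prompt can be encoded once and reused across several calls. A minimal sketch, assuming the (prompt_embeds, negative_prompt_embeds) tuple return order of recent diffusers releases:

Copied
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Precompute the text embeddings once so they can be reused across multiple generations.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

# Pass the cached embeddings instead of the raw prompt strings.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=init_image,
    strength=0.75,
).images[0]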
fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... 
return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/9461d26af0618187b0bb97f4f46a2c9e.txt b/scrapped_outputs/9461d26af0618187b0bb97f4f46a2c9e.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ac22e44f82a2bfeede971a5b1063163f7e9fc2 --- /dev/null +++ b/scrapped_outputs/9461d26af0618187b0bb97f4f46a2c9e.txt @@ -0,0 +1,176 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. 
This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5. Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9 channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image. Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: masterpiece, bestquality, sunset. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler, as this can have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit.
Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a hat" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") masterpiece, bestquality, sunset. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
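Building on the usage example above, here is a small sketch of the motion_scale argument (documented in full under __call__ below); the specific value used is only illustrative.

Copied
# Continuing from the PIA usage example above (pipe, image, prompt and generator already defined).
# motion_scale 0-2 adds progressively more motion, 3-5 biases the result towards looping motion,
# and 6-8 performs motion with image style transfer.
output = pipe(image=image, prompt=prompt, generator=generator, motion_scale=3)
export_to_gif(output.frames[0], "pia-looping-motion.gif")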
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → TextToVideoSDPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("../checkpoints/pia-diffusers") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... 
) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Optional = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[PIL.Image.Image]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, — NumPy array of shape `(batch_size, num_frames, channels, height, width, — Torch tensor of shape (batch_size, num_frames, channels, height, width). — Output class for PIAPipeline. diff --git a/scrapped_outputs/94622aaff063cd7bcf1ad54acdb4f341.txt b/scrapped_outputs/94622aaff063cd7bcf1ad54acdb4f341.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/94622aaff063cd7bcf1ad54acdb4f341.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). 
This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for the exponential multistep update instead of relying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models (DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler (DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation (NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order, which can be 1, 2, or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling.
prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. 
multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from the learned diffusion model at the current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/946a9cde657072aa594dc619d1df64df.txt b/scrapped_outputs/946a9cde657072aa594dc619d1df64df.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/946a9cde657072aa594dc619d1df64df.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are DiffusionPipeline classes that differ from the original implementation specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from.
For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. 
Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/946ac87065e63d5d3e0718c33c4ebd7a.txt b/scrapped_outputs/946ac87065e63d5d3e0718c33c4ebd7a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/947b94a81e0972c90d39918ee2dbec36.txt b/scrapped_outputs/947b94a81e0972c90d39918ee2dbec36.txt new file mode 100644 index 0000000000000000000000000000000000000000..b873ba3b9d3614922057d0c02bbc129d959f1e64 --- /dev/null +++ b/scrapped_outputs/947b94a81e0972c90d39918ee2dbec36.txt @@ -0,0 +1,138 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. 
There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: int = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. 
center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. 
+num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. +time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. 
Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. 
down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. 
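To make the forward() signature above concrete, here is a minimal sketch of running the model standalone on dummy latents. The checkpoint id is an assumption (any Stable Diffusion v1.x repository with a unet subfolder should behave similarly), and the shapes follow the v1.x configuration. Copied
import torch
from diffusers import UNet2DConditionModel

# Illustrative checkpoint: the "unet" subfolder of a Stable Diffusion v1.x repository.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

batch = 1
sample = torch.randn(batch, unet.config.in_channels, 64, 64)  # 64x64 latents correspond to 512x512 images
timestep = 10  # a single diffusion timestep
encoder_hidden_states = torch.randn(batch, 77, unet.config.cross_attention_dim)  # dummy text encoder states

with torch.no_grad():
    out = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # torch.Size([1, 4, 64, 64]), the same spatial size as the input latents
In normal use a pipeline prepares these tensors for you; memory helpers such as set_attention_slice("auto"), documented above, can be enabled on the same object before the call.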
FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. 
Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/948f203e53a1b451032049632e2d5f93.txt b/scrapped_outputs/948f203e53a1b451032049632e2d5f93.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/948f203e53a1b451032049632e2d5f93.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from it’s GitHub page is: Kandinsky 2.1 inherits best practicies from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. 
image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... 
).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
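Before the image-to-image __call__ reference below, note that the offloading helper documented above drops straight into the combined text-to-image example. The sketch below reuses that checkpoint and trades inference speed for a smaller GPU memory footprint; the prompt and step count are illustrative. Copied
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)

# Lowest GPU memory footprint, at the cost of slower inference (see enable_sequential_cpu_offload above).
pipe.enable_sequential_cpu_offload()

image = pipe("A starry night over a calm sea", num_inference_steps=25).images[0]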
__call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
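Note that strength and num_inference_steps interact: the pipeline first noises the input image to a point determined by strength and then only runs the remaining portion of the schedule. The snippet below is an illustrative sketch of that relationship rather than the pipeline's internal code, and the helper name effective_denoising_steps is hypothetical:
# Illustration only: how `strength` typically determines how many of the
# scheduled denoising steps are actually executed in img2img-style pipelines.
def effective_denoising_steps(num_inference_steps: int, strength: float) -> int:
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start

print(effective_denoising_steps(100, 0.3))  # 30: only ~30% of the schedule runs
print(effective_denoising_steps(100, 1.0))  # 100: the input image is essentially ignored
With the defaults above (strength=0.3, num_inference_steps=100), only around 30 denoising steps are executed, which is why low strength values stay close to the input image.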
Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... ).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. 
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
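Because the combined pipeline chains the unCLIP prior and the image-to-image decoder, the prior stage has its own knobs (prior_guidance_scale, prior_num_inference_steps) that can be tuned independently of the decoder's guidance_scale, num_inference_steps, and strength. A minimal sketch, assuming the same checkpoint and input image as the examples in this section:
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/frog.png"
)

image = pipe(
    prompt="A red cartoon frog, 4k",
    image=init_image,
    strength=0.3,                  # how far the decoder departs from init_image
    prior_guidance_scale=4.0,      # guidance for the text-to-image-embedding (prior) stage
    prior_num_inference_steps=25,  # denoising steps for the prior stage
    guidance_scale=4.0,            # guidance for the decoder stage
    num_inference_steps=100,       # denoising steps for the decoder stage
).images[0]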
Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image +import os + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W).
image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... 
negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
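In practice, enable_model_cpu_offload() (used in the examples above) is often the better first choice, since it only moves whole sub-models off the GPU between their forward passes; enable_sequential_cpu_offload() trades more speed for the largest memory savings. A short sketch of the two options, using the inpainting checkpoint from the example above:
from diffusers import AutoPipelineForInpainting
import torch

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16
)

# Moderate memory savings, small speed penalty: whole sub-models are
# moved to the GPU only while they are needed.
pipe.enable_model_cpu_offload()

# Maximum memory savings, larger speed penalty: submodules are streamed
# to the GPU one forward call at a time. Use one of the two, not both.
# pipe.enable_sequential_cpu_offload()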
diff --git a/scrapped_outputs/949a2ea5566269c5d83bd6201d2feec0.txt b/scrapped_outputs/949a2ea5566269c5d83bd6201d2feec0.txt new file mode 100644 index 0000000000000000000000000000000000000000..013c1269daf8c4cfa37c93f9c5a59b6be09f9038 --- /dev/null +++ b/scrapped_outputs/949a2ea5566269c5d83bd6201d2feec0.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord.
Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. 
Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. 
Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a "Good first issue" Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. 
Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. 
+The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a "Good second issue" Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. 
+Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. 
You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. 
If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. 
After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/94bf138b19c07ffd73f43436ed60e353.txt b/scrapped_outputs/94bf138b19c07ffd73f43436ed60e353.txt new file mode 100644 index 0000000000000000000000000000000000000000..84c29e20830b17a55539328c54b57bc8ab14854f --- /dev/null +++ b/scrapped_outputs/94bf138b19c07ffd73f43436ed60e353.txt @@ -0,0 +1,97 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. 
LDM3D generates an image and a depth map from a given text prompt, unlike existing text-to-image diffusion models such as Stable Diffusion, which only generate an image. With almost the same number of parameters, LDM3D is able to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original, the original checkpoint used in the paper, and ldm3d-4c, a newer version of LDM3D that uses 4-channel inputs instead of 6-channel inputs and is finetuned on higher-resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 timesteps: List = None guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. 
It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods. Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. It can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from the community pipelines.
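To make the ldm3d-pano description above concrete, here is a minimal, hedged sketch of how the panoramic checkpoint could be loaded with the same StableDiffusionLDM3DPipeline class; the Intel/ldm3d-pano repository id, the fp16 dtype, and the 1024×512 panoramic resolution are illustrative assumptions rather than values documented on this page.
import torch
from diffusers import StableDiffusionLDM3DPipeline

# Load the panoramic checkpoint with the same pipeline class used throughout this page.
pipe = StableDiffusionLDM3DPipeline.from_pretrained(
    "Intel/ldm3d-pano",  # assumed checkpoint id for the panoramic model
    torch_dtype=torch.float16,
).to("cuda")

prompt = "360 view of a large, bright living room"
# Panoramas are usually generated at a wide aspect ratio; adjust to what the checkpoint expects.
output = pipe(prompt, width=1024, height=512)

# The pipeline returns an LDM3DPipelineOutput with separate RGB and depth lists.
output.rgb[0].save("living_room_pano_rgb.jpg")
output.depth[0].save("living_room_pano_depth.png")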
diff --git a/scrapped_outputs/94c4e763dc612bd9ae242059105baf09.txt b/scrapped_outputs/94c4e763dc612bd9ae242059105baf09.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/94c4e763dc612bd9ae242059105baf09.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. 
This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/952d27bfd29336f67bc05e7e65be8017.txt b/scrapped_outputs/952d27bfd29336f67bc05e7e65be8017.txt new file mode 100644 index 0000000000000000000000000000000000000000..46aa6d95b72d8b3fe32eeda13319effe81eaefb7 --- /dev/null +++ b/scrapped_outputs/952d27bfd29336f67bc05e7e65be8017.txt @@ -0,0 +1,181 @@ +Euler scheduler + + +Overview + +Euler scheduler (Algorithm 2) from the paper Elucidating the Design Space of Diffusion-Based Generative Models by Karras et al. (2022). Based on the original k-diffusion implementation by Katherine Crowson. +Fast scheduler which often times generates good outputs with 20-30 steps. + +EulerDiscreteScheduler + + +class diffusers.EulerDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. 
+ + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +s_churn: float = 0.0 +s_tmin: float = 0.0 +s_tmax: float = inf +s_noise: float = 1.0 +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +s_churn (float) — + + +s_tmin (float) — + + +s_tmax (float) — + + +s_noise (float) — + + +generator (torch.Generator, optional) — Random number generator. + + +return_dict (bool) — option for returning tuple rather than EulerDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). 
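As a quick illustration of how this scheduler is typically swapped into an existing pipeline, here is a minimal sketch; the runwayml/stable-diffusion-v1-5 checkpoint and the 25-step setting are illustrative choices, not part of the scheduler API described above.
import torch
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Reuse the existing scheduler config so the beta schedule and prediction type stay consistent.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# 20-30 steps are usually enough with the Euler scheduler.
image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("euler_astronaut.png")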
diff --git a/scrapped_outputs/958fa3bdb0ed47776ef853cb281f8e86.txt b/scrapped_outputs/958fa3bdb0ed47776ef853cb281f8e86.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2eb8974cda89e11056bf65f8c38cb7c6ff2a3e9 --- /dev/null +++ b/scrapped_outputs/958fa3bdb0ed47776ef853cb281f8e86.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert "TPU" in device_type, ( + "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +) +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices.
The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. 
For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. 
Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) Resources To learn more about how JAX works with Stable Diffusion, you may be interested in reading: Accelerating Stable Diffusion XL Inference with JAX on Cloud TPU v5e diff --git a/scrapped_outputs/95904065616e66a9bbb9442ee3273702.txt b/scrapped_outputs/95904065616e66a9bbb9442ee3273702.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2c26b00071da8c297b8820b37c712b100e43678 --- /dev/null +++ b/scrapped_outputs/95904065616e66a9bbb9442ee3273702.txt @@ -0,0 +1,245 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model will use self.active_adapters() to retrieve the +list of adapters to enable. 
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. 
subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
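As a small sketch of the torch_dtype and subfolder arguments described above, reusing the runwayml/stable-diffusion-v1-5 repository that the surrounding examples already reference:
import torch
from diffusers import UNet2DConditionModel

# Load only the UNet submodule of a pipeline repository, in half precision.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
)
print(unet.dtype)  # torch.float16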
num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in case of single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). 
+ dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. 
It should be True
+for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the
+params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full
+half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # load model
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32, to cast these to float16
+>>> params = model.to_fp16(params)
+>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
+>>> # then pass the mask as follows
+>>> from flax import traverse_util
+
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> flat_params = traverse_util.flatten_dict(params)
+>>> mask = {
+... path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
+... for path in flat_params
+... }
+>>> mask = traverse_util.unflatten_dict(mask)
+>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) —
+A PyTree of model parameters. mask (Union[Dict, FrozenDict]) —
+A PyTree with same structure as the params tree. The leaves should be booleans. It should be True
+for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the
+model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # Download model and configuration from huggingface.co
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32, to illustrate the use of this method,
+>>> # we'll first cast to fp16 and back to fp32
+>>> params = model.to_fp16(params)
+>>> # now cast back to fp32
+>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) —
+The name of the repository you want to push your model, scheduler, or pipeline files to. It should
+contain your organization name when pushing to an organization. repo_id can also be a path to a local
+directory. commit_message (str, optional) —
+Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) —
+Whether or not the repository created should be private. token (str, optional) —
+The token to use as HTTP bearer authorization for remote files. The token generated when running
+huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) —
+Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) —
+Whether or not to convert the model weights to the safetensors format.
variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/95a1c3155d1ae5ae9875b4a738694e6c.txt b/scrapped_outputs/95a1c3155d1ae5ae9875b4a738694e6c.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/95a1c3155d1ae5ae9875b4a738694e6c.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. 
+>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/95b4b1ff4f82572f6a14829f3494bd4a.txt b/scrapped_outputs/95b4b1ff4f82572f6a14829f3494bd4a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/95b4b1ff4f82572f6a14829f3494bd4a.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. 
Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! 
Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/95bd5577da3e4bb52140dd14a5b75baf.txt b/scrapped_outputs/95bd5577da3e4bb52140dd14a5b75baf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/95d7b8e1abe30d2787d1165c62fe9fa5.txt b/scrapped_outputs/95d7b8e1abe30d2787d1165c62fe9fa5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/960123ca350a6c000acaefa8f07ca22b.txt b/scrapped_outputs/960123ca350a6c000acaefa8f07ca22b.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/960123ca350a6c000acaefa8f07ca22b.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. 
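To make this concrete before turning to Compel, the sketch below manually rescales the contextual token embeddings produced by the pipeline's text encoder. This is a deliberately simplified illustration of the idea (Compel's actual algorithm is more involved), and the token-matching heuristic here is only for demonstration: Copied import torch
+from diffusers import StableDiffusionPipeline
+
+# Illustrative only: naive prompt weighting by rescaling the token embeddings of one concept.
+pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True).to("cuda")
+
+prompt = "a red cat playing with a ball"
+inputs = pipe.tokenizer(
+    prompt, padding="max_length", max_length=pipe.tokenizer.model_max_length, return_tensors="pt"
+).to("cuda")
+with torch.no_grad():
+    prompt_embeds = pipe.text_encoder(inputs.input_ids)[0]  # (1, 77, hidden_dim) contextual embeddings
+
+# upweight every token that came from the word "ball"
+tokens = pipe.tokenizer.convert_ids_to_tokens(inputs.input_ids[0].tolist())
+for i, token in enumerate(tokens):
+    if token.startswith("ball"):
+        prompt_embeds[0, i] *= 1.5
+
+image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=20).images[0]
In practice, you should let Compel construct these embeddings for you, as shown in the rest of this guide.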
Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. 
Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. 
The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/9613303ed3994ea5e2c39b12350156b7.txt b/scrapped_outputs/9613303ed3994ea5e2c39b12350156b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..816a6ec9c2fb9e36207317fc29707b1dd833518a --- /dev/null +++ b/scrapped_outputs/9613303ed3994ea5e2c39b12350156b7.txt @@ -0,0 +1,412 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. 
Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video diffusion models without any additional training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found in the paper. The following example demonstrates the usage of FreeInit. Copied import torch
+from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
+from diffusers.utils import export_to_gif
+
+adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
+model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
+pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda")
+pipe.scheduler = DDIMScheduler.from_pretrained(
+    model_id,
+    subfolder="scheduler",
+    beta_schedule="linear",
+    clip_sample=False,
+    timestep_spacing="linspace",
+    steps_offset=1
+)
+
+# enable memory savings
+pipe.enable_vae_slicing()
+pipe.enable_vae_tiling()
+
+# enable FreeInit
+# Refer to the enable_free_init documentation for a full list of configurable parameters
+pipe.enable_free_init(method="butterworth", use_fast_sampling=True)
+
+# run inference
+output = pipe(
+    prompt="a panda playing a guitar, on a boat, in the ocean, high quality",
+    negative_prompt="bad quality, worse quality",
+    num_frames=16,
+    guidance_scale=7.5,
+    num_inference_steps=20,
+    generator=torch.Generator("cpu").manual_seed(666),
+)
+
+# disable FreeInit
+pipe.disable_free_init()
+
+frames = output.frames[0]
+export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) —
+Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) —
+Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) —
+A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) —
+A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) —
+A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) —
+A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
+DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline.
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Generator = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. 
order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. 
strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
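For reference, the FreeU scaling factors documented in this class can be switched on before a generation and off again afterwards. A minimal sketch is shown below; the factor values are illustrative placeholders rather than tuned recommendations (see enable_freeu() below and the linked repository for values known to work well), and pipe and video are assumed to be an already-loaded AnimateDiffVideoToVideoPipeline and a list of frames: Copied # illustrative values only; consult the FreeU repository for recommended settings
+pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)
+output = pipe(video=video, prompt="panda playing a guitar, on a boat, in the ocean, high quality", strength=0.5)
+pipe.disable_freeu()
+frames = output.frames[0]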
disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
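To make the helper methods documented above concrete, the following sketch shows one way to toggle the memory and FreeU helpers on a loaded pipeline before generation. The checkpoint ids are placeholders, and the FreeU values are only a commonly cited starting point for Stable Diffusion v1.5-style UNets, not settings recommended by this documentation. Copied
import torch
from diffusers import AnimateDiffVideoToVideoPipeline, MotionAdapter

# Checkpoint ids are placeholders, as in the example above.
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)

# Memory helpers: decode the VAE output in slices / tiles instead of all at once.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# FreeU: b1/b2 amplify backbone features, s1/s2 attenuate skip features.
# These values are an assumed starting point for SD v1.5-based UNets; tune them for your checkpoint.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

# ...generate as usual, then switch the tweaks off to restore the default behaviour...
pipe.disable_freeu()
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()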
AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (List[List[PIL.Image.Image]] or torch.Tensor or np.ndarray) — +List of PIL Images of length batch_size or torch.Tensor or np.ndarray of shape +(batch_size, num_frames, height, width, num_channels). Output class for AnimateDiff pipelines. diff --git a/scrapped_outputs/9648f236b3b5a205cfedbb53c494f635.txt b/scrapped_outputs/9648f236b3b5a205cfedbb53c494f635.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/9648f236b3b5a205cfedbb53c494f635.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... 
seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... 
) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! 
If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... 
ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/966168e40a86faf531886b2e410c4506.txt b/scrapped_outputs/966168e40a86faf531886b2e410c4506.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/966168e40a86faf531886b2e410c4506.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. 
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
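As an illustration of how this scheduler is typically used, the sketch below swaps EulerAncestralDiscreteScheduler into a Stable Diffusion pipeline (the runwayml/stable-diffusion-v1-5 checkpoint is assumed here) and samples in 25 steps; because ancestral sampling is stochastic, a generator is passed for reproducibility. Copied
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Reuse the existing scheduler config so the beta schedule, prediction type, etc. stay consistent.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=25,  # this scheduler often produces good results in 20-30 steps
    generator=generator,
).images[0]
image.save("astronaut_euler_ancestral.png")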
diff --git a/scrapped_outputs/96e8d38f3dda8073e330929b425877fe.txt b/scrapped_outputs/96e8d38f3dda8073e330929b425877fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/96e8d38f3dda8073e330929b425877fe.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. 
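If you prefer not to use git-lfs, the snapshot_download helper from the huggingface_hub library can materialize the same repository on disk; the sketch below is one possible workflow, not the only way to obtain a local copy. Copied
from huggingface_hub import snapshot_download
from diffusers import DiffusionPipeline

# Download the repository once into a local folder (the layout mirrors the Hub repo).
local_dir = snapshot_download("runwayml/stable-diffusion-v1-5")

# Loading from the returned path behaves like loading from a manually cloned folder.
pipe = DiffusionPipeline.from_pretrained(local_dir, use_safetensors=True)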
Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. 
For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. 
You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
+For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . 
+├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/96f29c9201df92f930f89dd120d93c44.txt b/scrapped_outputs/96f29c9201df92f930f89dd120d93c44.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/96f29c9201df92f930f89dd120d93c44.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. 
The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. 
First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", + +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up som GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass it off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory-usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it’s should be the embeddings of the "" +string. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/97299a6e6b936956b7ad37b111022cf5.txt b/scrapped_outputs/97299a6e6b936956b7ad37b111022cf5.txt new file mode 100644 index 0000000000000000000000000000000000000000..d1a13e4b4a70e8e6d6bd0c0f8b80cc8885fcabb5 --- /dev/null +++ b/scrapped_outputs/97299a6e6b936956b7ad37b111022cf5.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom-defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timesteps and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timesteps. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timesteps * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step and modifies the pipeline attributes and tensor variables for the next denoising step.
With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! Using Callbacks to interrupt the Diffusion Process The following pipelines support interrupting the diffusion process via callback: StableDiffusionPipeline StableDiffusionImg2ImgPipeline StableDiffusionInpaintPipeline StableDiffusionXLPipeline StableDiffusionXLImg2ImgPipeline StableDiffusionXLInpaintPipeline Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. The callback function should take the following arguments: pipe, i, t, and callback_kwargs (which must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) diff --git a/scrapped_outputs/97355edefb4bdd1ae4cc36c01a46d015.txt b/scrapped_outputs/97355edefb4bdd1ae4cc36c01a46d015.txt new file mode 100644 index 0000000000000000000000000000000000000000..da3499e1844fd5dbd2117a0f51782369e62b740f --- /dev/null +++ b/scrapped_outputs/97355edefb4bdd1ae4cc36c01a46d015.txt @@ -0,0 +1,251 @@ +Schedulers + +Diffusers contains multiple pre-built schedule functions for the diffusion process. + +What is a scheduler? + +The schedule functions, denoted Schedulers in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep to return a denoised sample. That’s why schedulers may also be called Samplers in other diffusion model implementations. +Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs. Adding noise in different manners represents the algorithmic process used to train a diffusion model by adding noise to images. +For inference, the scheduler defines how to update a sample based on an output from a pretrained model. +Schedulers are often defined by a noise schedule and an update rule to solve the underlying differential equation. + +Discrete versus continuous schedulers + +All schedulers take in a timestep to predict the updated version of the sample being diffused. +The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps. +Different algorithms use timesteps that can be discrete (accepting int inputs), such as the DDPMScheduler or PNDMScheduler, or continuous (accepting float inputs), such as the score-based schedulers ScoreSdeVeScheduler or ScoreSdeVpScheduler.
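As a small illustration of these ideas, the sketch below loads a pipeline, inspects the discrete integer timesteps a DDPMScheduler produces once set_timesteps() has been called, and swaps in a different compatible scheduler. It assumes the runwayml/stable-diffusion-v1-5 checkpoint used elsewhere in these examples is available, and the printed values are only indicative: Copied
import torch
from diffusers import DiffusionPipeline, DDPMScheduler, EulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)

# discrete schedulers expose integer timesteps once set_timesteps() has been called
scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
scheduler.set_timesteps(num_inference_steps=50)
print(scheduler.timesteps[:5])  # a tensor of int timesteps, e.g. tensor([980, 960, 940, 920, 900])

# schedulers are interchangeable at inference time; list the compatible ones and swap one in
print([cls.__name__ for cls in pipe.scheduler.compatibles])
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)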
+ +Designing Re-usable schedulers + +The core design principle between the schedule functions is to be model, system, and framework independent. +This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. +To this end, the design of schedulers is such that: +Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality. +Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists). +Many diffusion pipelines, such as StableDiffusionPipeline and DiTPipeline can use any of KarrasDiffusionSchedulers + +Schedulers Summary + +The following table summarizes all officially supported schedulers, their corresponding paper +Scheduler +Paper +ddim +Denoising Diffusion Implicit Models +ddim_inverse +Denoising Diffusion Implicit Models +ddpm +Denoising Diffusion Probabilistic Models +deis +DEISMultistepScheduler +singlestep_dpm_solver +Singlestep DPM-Solver +multistep_dpm_solver +Multistep DPM-Solver +heun +Heun scheduler inspired by Karras et. al paper +dpm_discrete +DPM Discrete Scheduler inspired by Karras et. al paper +dpm_discrete_ancestral +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper +stochastic_karras_ve +Variance exploding, stochastic sampling from Karras et. al +lms_discrete +Linear multistep scheduler for discrete beta schedules +pndm +Pseudo numerical methods for diffusion models (PNDM) +score_sde_ve +variance exploding stochastic differential equation (VE-SDE) scheduler +ipndm +improved pseudo numerical methods for diffusion models (iPNDM) +score_sde_vp +Variance preserving stochastic differential equation (VP-SDE) scheduler +euler +Euler scheduler +euler_ancestral +Euler Ancestral scheduler +vq_diffusion +VQDiffusionScheduler +unipc +UniPCMultistepScheduler +repaint +RePaint scheduler + +API + +The core API for any new scheduler must follow a limited structure. +Schedulers should provide one or more def step(...) functions that should be called to update the generated sample iteratively. +Schedulers should provide a set_timesteps(...) method that configures the parameters of a schedule function for a specific inference task. +Schedulers should be framework-specific. +The base class SchedulerMixin implements low level utilities used by multiple schedulers. + +SchedulerMixin + + +class diffusers.SchedulerMixin + +< +source +> +( +) + + + +Mixin containing common functions for the schedulers. +Class attributes: +_compatibles (List[str]) — A list of classes that are compatible with the parent class, so that +from_config can be used from a class different than the one used to save the config (should be overridden +by parent class). + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Dict[str, typing.Any] = None +subfolder: typing.Optional[str] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing the schedluer configurations saved using +save_pretrained(), e.g., ./my_model_directory/. 
+ + + +subfolder (str, optional) — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + + +Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a scheduler configuration object to the directory save_directory, so that it can be re-loaded using the +from_pretrained() class method. + +SchedulerOutput + + +The class `SchedulerOutput` contains the outputs from any schedulers `step(...)` call. + +class diffusers.schedulers.scheduling_utils.SchedulerOutput + +< +source +> +( +prev_sample: FloatTensor + +) + + +Parameters + +prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. + + + +Base class for the scheduler’s step function output. + +KarrasDiffusionSchedulers + +KarrasDiffusionSchedulers encompasses the main generalization of schedulers in Diffusers. The schedulers in this class are distinguished, at a high level, by their noise sampling strategy; the type of network and scaling; and finally the training strategy or how the loss is weighed. 
+The different schedulers, depending on the type of ODE solver, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in Diffusers. The schedulers in this class are given below: + +class diffusers.schedulers.KarrasDiffusionSchedulers + +< +source +> +( +value +names = None +module = None +qualname = None +type = None +start = 1 + +) + + + +An enumeration. diff --git a/scrapped_outputs/97370984f9d180d4fb0b0a37718d76ba.txt b/scrapped_outputs/97370984f9d180d4fb0b0a37718d76ba.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/974b0195ba7e40d311c3a3eb44795be6.txt b/scrapped_outputs/974b0195ba7e40d311c3a3eb44795be6.txt new file mode 100644 index 0000000000000000000000000000000000000000..816a6ec9c2fb9e36207317fc29707b1dd833518a --- /dev/null +++ b/scrapped_outputs/974b0195ba7e40d311c3a3eb44795be6.txt @@ -0,0 +1,412 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. 
Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + beta_schedule="linear", + clip_sample=False, + timestep_spacing="linspace", + steps_offset=1 +) + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_vae_tiling() + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# run inference +output = pipe( + prompt="a panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=20, + generator=torch.Generator("cpu").manual_seed(666), +) + +# disable FreeInit +pipe.disable_free_init() + +frames = output.frames[0] +export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Generator = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. 
order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. 
strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
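Both AnimateDiff pipelines expose encode_prompt(), which makes it possible to pre-compute prompt embeddings once and reuse them across several calls, optionally combined with the memory-saving helpers documented above. A minimal sketch with AnimateDiffPipeline, assuming the checkpoints from the usage examples earlier on this page (the prompt text is only an example, and scheduler setup is omitted for brevity): Copied
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.enable_vae_slicing()

# pre-compute the (positive, negative) prompt embeddings once
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a panda surfing a wave, high quality",  # example prompt
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="bad quality, worse quality",
)

# reuse the embeddings instead of passing a raw prompt
output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
)
frames = output.frames[0]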
AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (List[List[PIL.Image.Image]] or torch.Tensor or np.ndarray) — +List of PIL Images of length batch_size or torch.Tensor or np.ndarray of shape +(batch_size, num_frames, height, width, num_channels). Output class for AnimateDiff pipelines. diff --git a/scrapped_outputs/978c1b231a7e16247f03354e18d49ffb.txt b/scrapped_outputs/978c1b231a7e16247f03354e18d49ffb.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/978c1b231a7e16247f03354e18d49ffb.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer. For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/979a15d26d4ca211c7f74a9d467344b7.txt b/scrapped_outputs/979a15d26d4ca211c7f74a9d467344b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/979a15d26d4ca211c7f74a9d467344b7.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. 
do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked ares in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked ares in an image, and expands region to match the aspect ratio of the original image; +for example, if user drew mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image(PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. if it is a numpy array, should have +shape [batch, height, width] or [batch, height, width, channel] if it is a pytorch tensor, should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the height of image input. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use the width of the image` input. This function return the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. 
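A minimal round-trip sketch showing how preprocess() (documented below) and postprocess() fit together; the file names are placeholders and the VAE encode/denoise/decode step is omitted: Copied
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

processor = VaeImageProcessor(vae_scale_factor=8)

# PIL image -> normalized torch tensor of shape (1, 3, H, W), resized so H and W are multiples of 8
image = Image.open("input.png").convert("RGB")  # placeholder path
tensor = processor.preprocess(image)

# ... normally the tensor would be encoded by the VAE, denoised, and decoded back here ...

# tensor in [-1, 1] -> list of PIL images
pil_images = processor.postprocess(tensor, output_type="pil")
pil_images[0].save("roundtrip.png")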
preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of supported formats. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the get_default_height_width() to get default height. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. 
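A rough sketch of how paired RGB and depth inputs might be prepared with this processor; the file paths are placeholders, and the depth image is assumed to be in a format the processor accepts (for example the 16-bit depth PNGs produced by the LDM3D pipelines): Copied
from PIL import Image
from diffusers.image_processor import VaeImageProcessorLDM3D

processor = VaeImageProcessorLDM3D(vae_scale_factor=8)

rgb = Image.open("scene_rgb.png").convert("RGB")  # placeholder path
depth = Image.open("scene_depth_16bit.png")       # placeholder path

# preprocess returns the RGB and depth tensors ready for the LDM3D VAE
rgb_tensor, depth_tensor = processor.preprocess(rgb, depth)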
depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/97b010c23052da3f624fb5db2f9e0520.txt b/scrapped_outputs/97b010c23052da3f624fb5db2f9e0520.txt new file mode 100644 index 0000000000000000000000000000000000000000..28d0025fe6227f68f990a2d355304bcc0dc60e92 --- /dev/null +++ b/scrapped_outputs/97b010c23052da3f624fb5db2f9e0520.txt @@ -0,0 +1,112 @@ +Unconditional Latent Diffusion + + +Overview + +Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion_uncond.py +Unconditional Image Generation +- + +Examples: + + +LDMPipeline + + +class diffusers.LDMPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: DDIMScheduler + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +DDIMScheduler is to be used in combination with unet to denoise the encoded image latents. + + + +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/97b5460c0bdf6f9535250c33b24ddc00.txt b/scrapped_outputs/97b5460c0bdf6f9535250c33b24ddc00.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/97b5460c0bdf6f9535250c33b24ddc00.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. 
Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. 
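If you already have the combined text-to-image pipeline from the previous example in memory, you don't necessarily have to reload any weights to switch to image-to-image. As a rough sketch, reusing the pipeline object created above and assuming AutoPipelineForImage2Image.from_pipe() supports the Kandinsky combined pipelines:

from diffusers import AutoPipelineForImage2Image

# Reuse the already-loaded components of the text-to-image pipeline
# instead of allocating a second copy of the models in memory.
pipeline_i2i = AutoPipelineForImage2Image.from_pipe(pipeline)

The rest of this section instead loads the prior and decoder pipelines explicitly so each step is visible.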
Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an 
image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
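To confirm where the time goes on your hardware before applying any of the tips, you can time the two stages separately. This is a rough sketch, assuming the Kandinsky 2.1 checkpoints used throughout this guide and a CUDA device; the exact numbers are only illustrative:

import time

import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")
decoder = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "A fantasy landscape, Cinematic lighting"

# Time the prior (image embedding) stage.
torch.cuda.synchronize()
start = time.perf_counter()
image_embeds, negative_image_embeds = prior(prompt, negative_prompt="low quality").to_tuple()
torch.cuda.synchronize()
prior_seconds = time.perf_counter() - start

# Time the decoder (latent diffusion + MoVQ decoding) stage.
start = time.perf_counter()
image = decoder(
    prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
).images[0]
torch.cuda.synchronize()
decoder_seconds = time.perf_counter() - start

print(f"prior: {prior_seconds:.1f}s, decoder: {decoder_seconds:.1f}s")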
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/97ca6b50b1ee5163ccfac83f445d6c7e.txt b/scrapped_outputs/97ca6b50b1ee5163ccfac83f445d6c7e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/97ca6b50b1ee5163ccfac83f445d6c7e.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. 
Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! 
Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/97d0f64f56b25df775b12dc45a2bbc38.txt b/scrapped_outputs/97d0f64f56b25df775b12dc45a2bbc38.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c8b823c44c2e8e4db15a8f7d06e4e6905b97be5 --- /dev/null +++ b/scrapped_outputs/97d0f64f56b25df775b12dc45a2bbc38.txt @@ -0,0 +1,255 @@ +Custom Diffusion training example + +Custom Diffusion is a method to customize text-to-image models like Stable Diffusion given just a few (4~5) images of a subject. +The train_custom_diffusion.py script shows how to implement the training procedure and adapt it for stable diffusion. +This training example was contributed by Nupur Kumari (one of the authors of Custom Diffusion). + +Running locally with PyTorch + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: +Important +To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install -e . +Then cd into the example folder and run + + + Copied +pip install -r requirements.txt +pip install clip-retrieval +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config +Or for a default accelerate configuration without answering questions about your environment + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell, e.g. a notebook + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() + +Cat example 😺 + +Now let’s get our dataset. Download the dataset from here and unzip it. +We also collect 200 real images using clip-retrieval, which are combined with the target images in the training dataset as a regularization. This prevents overfitting to the given target images. The following flags enable the regularization: with_prior_preservation and real_prior with prior_loss_weight=1.0. +The class_prompt should be the same category name as the target images. The collected real images have text captions similar to the class_prompt. The retrieved images are saved in class_data_dir. You can disable real_prior to use generated images as regularization. To collect the real images, use this command first before training. + + + Copied +pip install clip-retrieval +python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 +Note: Change the resolution to 768 if you are using the stable-diffusion-2 768x768 model.
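Before launching training, it can help to sanity-check that the retrieval step actually populated class_data_dir. A small sketch follows; the exact folder layout produced by retrieve.py may differ, so treat the path and the expected file count as assumptions:

from pathlib import Path

class_data_dir = Path("real_reg/samples_cat")  # path used in the retrieve.py command above
files = [p for p in class_data_dir.rglob("*") if p.is_file()]
# Expect roughly --num_class_images image files, possibly alongside caption/index files.
print(f"{len(files)} files found under {class_data_dir}")

With the regularization images in place, launch training with the following command.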
+ + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ + --class_prompt="cat" --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr --hflip \ + --modifier_token "" +Use --enable_xformers_memory_efficient_attention for faster training with lower VRAM requirement (16GB per GPU). Follow this guide for installation instructions. +To track your experiments using Weights and Biases (wandb) and to save intermediate results (whcih we HIGHLY recommend), follow these steps: +Install wandb: pip install wandb. +Authorize: wandb login. +Then specify a validation_prompt and set report_to to wandb while launching training. You can also configure the following related arguments:num_validation_images +validation_steps +Here is an example command: + + + Copied +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ + --class_prompt="cat" --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" +Here is an example Weights and Biases page where you can check out the intermediate results along with other training details. +If you specify --push_to_hub, the learned parameters will be pushed to a repository on the Hugging Face Hub. Here is an example repository. + +Training on multiple concepts 🐱🪵 + +Provide a json file with the info about each concept, similar to this. +To collect the real images run this command for each concept in the json file. + + + Copied +pip install clip-retrieval +python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200 +And then we’re ready to start training! + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --output_dir=$OUTPUT_DIR \ + --concepts_list=./concept_list.json \ + --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --num_class_images=200 \ + --scale_lr --hflip \ + --modifier_token "+" +Here is an example Weights and Biases page where you can check out the intermediate results along with other training details. + +Training on human faces + +For fine-tuning on human faces we found the following configuration to work better: learning_rate=5e-6, max_train_steps=1000 to 2000, and freeze_model=crossattn with at least 15-20 images. +To collect the real images use this command first before training. 
+ + + Copied +pip install clip-retrieval +python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200 +Then start training! + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="path-to-images" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_person/ \ + --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ + --class_prompt="person" --num_class_images=200 \ + --instance_prompt="photo of a person" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=5e-6 \ + --lr_warmup_steps=0 \ + --max_train_steps=1000 \ + --scale_lr --hflip --noaug \ + --freeze_model crossattn \ + --modifier_token "" \ + --enable_xformers_memory_efficient_attention + +Inference + +Once you have trained a model using the above command, you can run inference using the below command. Make sure to include the modifier token (e.g. \ in above example) in your prompt. + + + Copied +import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda") +pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipe.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipe( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") +It’s possible to directly load these parameters from a Hub repository: + + + Copied +import torch +from huggingface_hub.repocard import RepoCard +from diffusers import DiffusionPipeline + +model_id = "sayakpaul/custom-diffusion-cat" +card = RepoCard.load(model_id) +base_model_id = card.data.to_dict()["base_model"] + +pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") +pipe.load_textual_inversion(model_id, weight_name=".bin") + +image = pipe( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") +Here is an example of performing inference with multiple concepts: + + + Copied +import torch +from huggingface_hub.repocard import RepoCard +from diffusers import DiffusionPipeline + +model_id = "sayakpaul/custom-diffusion-cat-wooden-pot" +card = RepoCard.load(model_id) +base_model_id = card.data.to_dict()["base_model"] + +pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") +pipe.load_textual_inversion(model_id, weight_name=".bin") +pipe.load_textual_inversion(model_id, weight_name=".bin") + +image = pipe( + "the cat sculpture in the style of a wooden pot", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("multi-subject.png") +Here, cat and wooden pot refer to the multiple concepts. + +Inference from a training checkpoint + +You can also perform inference from one of the complete checkpoint saved during the training process, if you used the --checkpointing_steps argument. +TODO. + +Set grads to none + +To save even more memory, pass the --set_grads_to_none argument to the script. This will set grads to None instead of zero. 
However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument. +More info: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html + +Experimental results + +You can refer to our webpage that discusses our experiments in detail. diff --git a/scrapped_outputs/97da1c722a1bea948b3c0d037bf47087.txt b/scrapped_outputs/97da1c722a1bea948b3c0d037bf47087.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/97da1c722a1bea948b3c0d037bf47087.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. 
See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). 
+ torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. 
This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. 
The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. 
Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. 
All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/97eb8d176e9628f573b95870a046fafe.txt b/scrapped_outputs/97eb8d176e9628f573b95870a046fafe.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4bf68022a6cca56941ea31fe97ff663e04a88a6 --- /dev/null +++ b/scrapped_outputs/97eb8d176e9628f573b95870a046fafe.txt @@ -0,0 +1,66 @@ +VQDiffusionScheduler VQDiffusionScheduler converts the transformer model’s output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. VQDiffusionScheduler class diffusers.VQDiffusionScheduler < source > ( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 ) Parameters num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. num_train_timesteps (int, defaults to 100) — +The number of diffusion steps to train the model. alpha_cum_start (float, defaults to 0.99999) — +The starting cumulative alpha value. alpha_cum_end (float, defaults to 0.00009) — +The ending cumulative alpha value. 
gamma_cum_start (float, defaults to 0.00009) — +The starting cumulative gamma value. gamma_cum_end (float, defaults to 0.99999) — +The ending cumulative gamma value. A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) Parameters t (torch.Long) — +The timestep that determines which transition matrix is used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t. cumulative (bool) — +If cumulative is False, the single step transition matrix t-1->t is used. If cumulative is +True, the cumulative transition matrix 0->t is used. Returns +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms): +_0(x_t | x_{t-1\} = C_0) ... q_n(x_t | x_{t-1\} = C_0) + . . . + . . . + . . . +q_0(x_t | x_{t-1\} = C_k) ... q_n(x_t | x_{t-1\} = C_k)`} + /> +cumulative result (omitting logarithms): +_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) + . . . + . . . + . . . +q_0_cumulative(x_t | x_0 = C_{k-1\}) ... q_n_cumulative(x_t | x_0 = C_{k-1\})`} + /> + Calculates the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. q_posterior < source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels) Parameters log_p_x_0 (torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)) — +The log probabilities for the predicted classes of the initial latent pixels. Does not include a +prediction for the masked class as the initial unnoised image cannot be masked. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. t (torch.Long) — +The timestep that determines which transition matrix is used. Returns +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +The log probabilities for the predicted classes of the image at timestep t-1. + Calculates the log probabilities for the predicted classes of the image at timestep t-1: Copied p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) ) set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps and diffusion process parameters (alpha, beta, gamma) should be moved +to. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
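The schedule setup can be illustrated with a minimal sketch (the codebook size below is hypothetical and must match the VQ-VAE paired with this scheduler; num_inference_steps may not exceed num_train_timesteps, which defaults to 100): Copied
from diffusers import VQDiffusionScheduler

# Hypothetical vocabulary: 4096 latent-pixel classes plus one extra class for the masked pixel.
scheduler = VQDiffusionScheduler(num_vec_classes=4096 + 1)

# Prepare the discrete timestep schedule before running the reverse process.
scheduler.set_timesteps(num_inference_steps=100)
print(scheduler.timesteps)  # descending timesteps, from 99 down to 0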
step < source > ( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: Optional = None return_dict: bool = True ) → VQDiffusionSchedulerOutput or tuple Parameters t (torch.long) — +The timestep that determines which transition matrices are used. x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. generator (torch.Generator, or None) — +A random number generator for the noise applied to p(x_{t-1} | x_t) before it is sampled from. return_dict (bool, optional, defaults to True) — +Whether or not to return a VQDiffusionSchedulerOutput or +tuple. Returns +VQDiffusionSchedulerOutput or tuple + +If return_dict is True, VQDiffusionSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by the reverse transition distribution. See +q_posterior() for more details about how the distribution is computer. VQDiffusionSchedulerOutput class diffusers.schedulers.scheduling_vq_diffusion.VQDiffusionSchedulerOutput < source > ( prev_sample: LongTensor ) Parameters prev_sample (torch.LongTensor of shape (batch size, num latent pixels)) — +Computed sample x_{t-1} of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/97f752f223e0c499d305f2510eef6654.txt b/scrapped_outputs/97f752f223e0c499d305f2510eef6654.txt new file mode 100644 index 0000000000000000000000000000000000000000..61c02f3d039e77b64983e654bc361b83b19eed3c --- /dev/null +++ b/scrapped_outputs/97f752f223e0c499d305f2510eef6654.txt @@ -0,0 +1,337 @@ +Image-to-Image Generation + + +StableDiffusionImg2ImgPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion. +The original codebase can be found here: CampVis/stable-diffusion +StableDiffusionImg2ImgPipeline is compatible with all Stable Diffusion checkpoints for Text-to-Image +The pipeline uses the diffusion-denoising mechanism proposed by SDEdit (SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations +proposed by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon). + +class diffusers.StableDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
+ + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. 
Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
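A short sketch of toggling sliced attention on this pipeline (the checkpoint name mirrors the example above; any Stable Diffusion checkpoint works the same way): Copied
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "auto" (the default) halves the attention heads so attention is computed in two steps.
pipe.enable_attention_slicing()

# "max" runs one slice at a time for the largest memory savings, at a further speed cost.
# pipe.enable_attention_slicing("max")

# Restore single-step attention once memory is no longer a concern.
# pipe.disable_attention_slicing()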
+ +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/98156e20d1e025a1c5d820c53d177fb7.txt b/scrapped_outputs/98156e20d1e025a1c5d820c53d177fb7.txt new file mode 100644 index 0000000000000000000000000000000000000000..191230d895650a96c9b8f907a3911fdd00d72140 --- /dev/null +++ b/scrapped_outputs/98156e20d1e025a1c5d820c53d177fb7.txt @@ -0,0 +1,55 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. 
Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. 
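To make the scheduler’s role concrete before the method reference that follows, here is a minimal unconditional sampling sketch built on set_timesteps() and step(); it assumes the google/ddpm-cat-256 checkpoint and a CUDA device: Copied
import torch
from diffusers import DDPMScheduler, UNet2DModel

scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")

scheduler.set_timesteps(50)
sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
).to("cuda")

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    # step() reverses the diffusion process by one timestep and returns x_{t-1} in `prev_sample`.
    sample = scheduler.step(noise_pred, t, sample).prev_sample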
DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/982fbaa1fb385ee228610e8f62272ddc.txt b/scrapped_outputs/982fbaa1fb385ee228610e8f62272ddc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d769a7f9060837ab9edb28b421635809b26af2d7 --- /dev/null +++ b/scrapped_outputs/982fbaa1fb385ee228610e8f62272ddc.txt @@ -0,0 +1,61 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. 
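As a rough sketch of how a processor is attached in practice (assuming a Stable Diffusion UNet; set_attn_processor() also accepts a dict mapping attention module names to processors for per-layer control): Copied
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Swap every attention block over to the PyTorch 2.0 scaled dot-product implementation...
unet.set_attn_processor(AttnProcessor2_0())

# ...or back to the default processor.
unet.set_attn_processor(AttnProcessor())

# attn_processors maps each attention module name to the processor it currently uses.
print(set(type(p).__name__ for p in unet.attn_processors.values()))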
AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in future. LoRAAttnProcessor class diffusers.models.attention_processor.LoRAAttnProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism. LoRAAttnProcessor2_0 class diffusers.models.attention_processor.LoRAAttnProcessor2_0 < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None **kwargs ) Parameters hidden_size (int) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism using PyTorch 2.0’s memory-efficient scaled dot-product +attention. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. 
CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. 
kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. diff --git a/scrapped_outputs/987898e671aa15ec4048493fe95d562a.txt b/scrapped_outputs/987898e671aa15ec4048493fe95d562a.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/987898e671aa15ec4048493fe95d562a.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. 
You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model. 
One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to setup the optimizer to learn the prior mode’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k,v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipe.enable_model_cpu_offload() +prompt="A robot pokemon, 4k photo" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/988278da6fbdf5c71650fbf6411d5b66.txt b/scrapped_outputs/988278da6fbdf5c71650fbf6411d5b66.txt new file mode 100644 index 0000000000000000000000000000000000000000..3485508c41ab8cb3cad851252e99cb060411c2b8 --- /dev/null +++ b/scrapped_outputs/988278da6fbdf5c71650fbf6411d5b66.txt @@ -0,0 +1,109 @@ +Multi-instrument Music Synthesis with Spectrogram Diffusion + + +Overview + +Spectrogram Diffusion by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel. +An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. 
We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes. +The original codebase of this implementation can be found at magenta/music-spectrogram-diffusion. + +Model + + +As depicted above the model takes as input a MIDI file and tokenizes it into a sequence of 5 second intervals. Each tokenized interval then together with positional encodings is passed through the Note Encoder and its representation is concatenated with the previous window’s generated spectrogram representation obtained via the Context Encoder. For the initial 5 second window this is set to zero. The resulting context is then used as conditioning to sample the denoised Spectrogram from the MIDI window and we concatenate this spectrogram to the final output as well as use it for the context of the next MIDI window. The process repeats till we have gone over all the MIDI inputs. Finally a MelGAN decoder converts the potentially long spectrogram to audio which is the final result of this pipeline. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_spectrogram_diffusion.py +Unconditional Audio Generation +- + +Example usage + + + + Copied +from diffusers import SpectrogramDiffusionPipeline, MidiProcessor + +pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion") +pipe = pipe.to("cuda") +processor = MidiProcessor() + +# Download MIDI from: wget http://www.piano-midi.de/midis/beethoven/beethoven_hammerklavier_2.mid +output = pipe(processor("beethoven_hammerklavier_2.mid")) + +audio = output.audios[0] + +SpectrogramDiffusionPipeline + + +class diffusers.SpectrogramDiffusionPipeline + +< +source +> +( +notes_encoder: SpectrogramNotesEncoder +continuous_encoder: SpectrogramContEncoder +decoder: T5FilmDecoder +scheduler: DDPMScheduler +melgan: typing.Any + +) + + + + +__call__ + +< +source +> +( +input_tokens: typing.List[typing.List[int]] +generator: typing.Optional[torch._C.Generator] = None +num_inference_steps: int = 100 +return_dict: bool = True +output_type: str = 'numpy' +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) + + + + +scale_features + +< +source +> +( +features +output_range = (-1.0, 1.0) +clip = False + +) + + + +Linearly scale features to network outputs range. + +scale_to_features + +< +source +> +( +outputs +input_range = (-1.0, 1.0) +clip = False + +) + + + +Invert by linearly scaling network outputs to features range. diff --git a/scrapped_outputs/988f1d66937752bc240416874eb1b52d.txt b/scrapped_outputs/988f1d66937752bc240416874eb1b52d.txt new file mode 100644 index 0000000000000000000000000000000000000000..42ea24268abea88970a93b02e6ba84a08fdea110 --- /dev/null +++ b/scrapped_outputs/988f1d66937752bc240416874eb1b52d.txt @@ -0,0 +1,277 @@ +DreamBooth fine-tuning example + +DreamBooth is a method to personalize text-to-image models like stable diffusion given just a few (3~5) images of a subject. + +Dreambooth examples from the project’s blog. +The Dreambooth training script shows how to implement this training procedure on a pre-trained Stable Diffusion model. 
+Dreambooth fine-tuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our in-depth analysis with recommended settings for different subjects, and go from there. + +Training locally + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies. We also recommend to install diffusers from the main github branch. + + + Copied +pip install git+https://github.com/huggingface/diffusers +pip install -U -r diffusers/examples/dreambooth/requirements.txt +xFormers is not part of the training requirements, but we recommend you install it if you can. It could make your training faster and less memory intensive. +After all dependencies have been set up you can configure a 🤗 Accelerate environment with: + + + Copied +accelerate config +In this example we’ll use model version v1-4, so please visit its card and carefully read the license before proceeding. +The command below will download and cache the model weights from the Hub because we use the model’s Hub id CompVis/stable-diffusion-v1-4. You may also clone the repo locally and use the local path in your system where the checkout was saved. + +Dog toy example + +In this example we’ll use these images to add a new concept to Stable Diffusion using the Dreambooth process. They will be our training data. Please, download them and place them somewhere in your system. +Then you can launch the training script using: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 + +Training with a prior-preserving loss + +Prior preservation is used to avoid overfitting and language-drift. Please, refer to the paper to learn more about it if you are interested. For prior preservation, we use other images of the same class as part of the training process. The nice thing is that we can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path we specify. +According to the paper, it’s recommended to generate num_epochs * num_samples images for prior preservation. 200-300 works well for most cases. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Saving checkpoints while training + +It’s easy to overfit while training with Dreambooth, so sometimes it’s useful to save regular checkpoints during the process. 
One of the intermediate checkpoints might work better than the final model! To use this feature you need to pass the following argument to the training script: + + + Copied + --checkpointing_steps=500 +This will save the full training state in subfolders of your output_dir. Subfolder names begin with the prefix checkpoint-, and then the number of steps performed so far; for example: checkpoint-1500 would be a checkpoint saved after 1500 training steps. + +Resuming training from a saved checkpoint + +If you want to resume training from any of the saved checkpoints, you can pass the argument --resume_from_checkpoint and then indicate the name of the checkpoint you want to use. You can also use the special string "latest" to resume from the last checkpoint saved (i.e., the one with the largest number of steps). For example, the following would resume training from the checkpoint saved after 1500 steps: + + + Copied + --resume_from_checkpoint="checkpoint-1500" +This would be a good opportunity to tweak some of your hyperparameters if you wish. + +Performing inference using a saved checkpoint + +Saved checkpoints are stored in a format suitable for resuming training. They not only include the model weights, but also the state of the optimizer, data loaders and learning rate. +Note: If you have installed "accelerate>=0.16.0" you can use the following code to run +inference from an intermediate checkpoint. + + + Copied +from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +# Load the pipeline with the same arguments (model, revision) that were used for training +model_id = "CompVis/stable-diffusion-v1-4" + +unet = UNet2DConditionModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained(model_id, unet=unet, text_encoder=text_encoder, dtype=torch.float16) +pipeline.to("cuda") + +# Perform inference, or save, or push to the hub +pipeline.save_pretrained("dreambooth-pipeline") +If you have installed "accelerate<0.16.0" you need to first convert it to an inference pipeline. This is how you could do it: + + + Copied +from accelerate import Accelerator +from diffusers import DiffusionPipeline + +# Load the pipeline with the same arguments (model, revision) that were used for training +model_id = "CompVis/stable-diffusion-v1-4" +pipeline = DiffusionPipeline.from_pretrained(model_id) + +accelerator = Accelerator() + +# Use text_encoder if `--train_text_encoder` was used for the initial training +unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder) + +# Restore state from a checkpoint path. You have to use the absolute path here. +accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100") + +# Rebuild the pipeline with the unwrapped models (assignment to .unet and .text_encoder should work too) +pipeline = DiffusionPipeline.from_pretrained( + model_id, + unet=accelerator.unwrap_model(unet), + text_encoder=accelerator.unwrap_model(text_encoder), +) + +# Perform inference, or save, or push to the hub +pipeline.save_pretrained("dreambooth-pipeline") + +Training on a 16GB GPU + +With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes, it’s possible to train dreambooth on a 16GB GPU. 
+ + + Copied +pip install bitsandbytes +Then pass the --use_8bit_adam option to the training script. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=2 --gradient_checkpointing \ + --use_8bit_adam \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Fine-tune the text encoder in addition to the UNet + +The script also allows you to fine-tune the text_encoder along with the unet. It has been observed experimentally that this gives much better results, especially on faces. Please refer to our blog for more details. +To enable this option, pass the --train_text_encoder argument to the training script. +Training the text encoder requires additional memory, so training won’t fit on a 16GB GPU. You’ll need at least 24GB VRAM to use this option. + + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_text_encoder \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --use_8bit_adam \ + --gradient_checkpointing \ + --learning_rate=2e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +Training on an 8 GB GPU: + +Using DeepSpeed, it’s even possible to offload some +tensors from VRAM to either CPU or NVMe, allowing training to proceed with less GPU memory. +DeepSpeed needs to be enabled with accelerate config. During configuration, +answer yes to “Do you want to use DeepSpeed?”. Combining DeepSpeed stage 2, fp16 +mixed precision, and offloading both the model parameters and the optimizer state to CPU, it’s +possible to train on under 8 GB VRAM. The drawback is that this requires more system RAM (about 25 GB). See the DeepSpeed documentation for more configuration options. +Changing the default Adam optimizer to DeepSpeed’s special version of Adam +deepspeed.ops.adam.DeepSpeedCPUAdam gives a substantial speedup, but enabling +it requires the system’s CUDA toolchain version to be the same as the one installed with PyTorch. 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. 
+ + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="path_to_training_images" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --sample_batch_size=1 \ + --gradient_accumulation_steps=1 --gradient_checkpointing \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 \ + --mixed_precision=fp16 + +Inference + +Once you have trained a model, inference can be done using the StableDiffusionPipeline, by simply indicating the path where the model was saved. Make sure that your prompts include the special identifier used during training (sks in the previous examples). +Note: If you have installed "accelerate>=0.16.0" you can use the following code to run +inference from an intermediate checkpoint. + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "path_to_saved_model" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A photo of sks dog in a bucket" +image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] + +image.save("dog-bucket.png") +You may also run inference from any of the saved training checkpoints. diff --git a/scrapped_outputs/98a0dc95ff04f4bcf77bb841a264327c.txt b/scrapped_outputs/98a0dc95ff04f4bcf77bb841a264327c.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/98a0dc95ff04f4bcf77bb841a264327c.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. 
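For reference, the conversion itself is driven by the torch2coreml script in that repository. The command below is only a hedged sketch: the module path mirrors the python_coreml_stable_diffusion.pipeline inference command used later in this guide, but flag names can change between releases of apple/ml-stable-diffusion, so check its README before running it.

python -m python_coreml_stable_diffusion.torch2coreml \
    --model-version CompVis/stable-diffusion-v1-4 \
    --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker \
    --attention-implementation SPLIT_EINSUM \
    --bundle-resources-for-swift-cli \
    -o models/coreml-stable-diffusion-v1-4

In this sketch, --attention-implementation chooses between the original and split_einsum attention variants discussed below, and --bundle-resources-for-swift-cli additionally produces the compiled bundle that Swift apps consume; both can be omitted for a plain Python-only conversion.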
Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. 
This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. 
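To target the Apple Neural Engine from Swift instead, the same two steps work with the split_einsum variant; the snippet below only swaps the variant string and the compute units relative to the examples above (the local paths are illustrative).

from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "split_einsum/compiled"  # ANE-friendly attention, compiled for Swift

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")

Then point the command line tool at the directory that contains the compiled .mlmodelc bundles and enable the Neural Engine:

swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_split_einsum_compiled --compute-units cpuAndNeuralEngine "a photo of an astronaut riding a horse on mars"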
Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/98a219ab294594bbfabd7daea2153e96.txt b/scrapped_outputs/98a219ab294594bbfabd7daea2153e96.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/98a219ab294594bbfabd7daea2153e96.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). 
It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. 
Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
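In typical use you don’t call set_timesteps() or step() yourself; the pipeline invokes them internally once the scheduler has been attached. A minimal usage sketch (the checkpoint id and prompt below are placeholders, not part of this reference):

import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

# Load any Stable Diffusion checkpoint and swap in DEIS, reusing the existing scheduler config
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# DEIS is a fast multistep solver, so a relatively small number of inference steps is usually enough
image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("deis_sample.png")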
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/98b3a1685a15cba0533fc791b335e795.txt b/scrapped_outputs/98b3a1685a15cba0533fc791b335e795.txt new file mode 100644 index 0000000000000000000000000000000000000000..a185d25c228d5e5b754781664d828e2c9566ab08 --- /dev/null +++ b/scrapped_outputs/98b3a1685a15cba0533fc791b335e795.txt @@ -0,0 +1,739 @@ +Zero-shot Image-to-Image Translation + + +Overview + +Zero-shot Image-to-Image Translation. +The abstract of the paper is the following: +Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Tips + +The pipeline can be conditioned on real input images. Check out the code examples below to know more. +The pipeline exposes two arguments namely source_embeds and target_embeds +that let you control the direction of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the pipeline, you simply have to set the embeddings related to the phrases including “cat” to +source_embeds and “dog” to target_embeds. 
Refer to the code example below for more details. +When you’re using this pipeline from a prompt, specify the source concept in the prompt. Taking +the above example, a valid input prompt would be: “a high resolution painting of a cat in the style of van gough”. +If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_embeds and target_embeds. +Change the input prompt to include “dog”. +To learn more about how the source and target embeddings are generated, refer to the original +paper. Below, we also provide some directions on how to generate the embeddings. +Note that the quality of the outputs generated with this pipeline is dependent on how good the source_embeds and target_embeds are. Please, refer to this discussion for some suggestions on the topic. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionPix2PixZeroPipeline +Text-Based Image Editing +🤗 Space + +Usage example + + +Based on an image generated with the input prompt + + + + Copied +import requests +import torch + +from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +def download(embedding_url, local_filepath): + r = requests.get(embedding_url) + with open(local_filepath, "wb") as f: + f.write(r.content) + + +model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + model_ckpt, conditions_input_image=False, torch_dtype=torch.float16 +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "a high resolution painting of a cat in the style of van gogh" +src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt" +target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt" + +for url in [src_embs_url, target_embs_url]: + download(url, url.split("/")[-1]) + +src_embeds = torch.load(src_embs_url.split("/")[-1]) +target_embeds = torch.load(target_embs_url.split("/")[-1]) + +images = pipeline( + prompt, + source_embeds=src_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images +images[0].save("edited_image_dog.png") + +Based on an input image + +When the pipeline is conditioned on an input image, we first obtain an inverted +noise from it using a DDIMInverseScheduler with the help of a generated caption. Then +the inverted noise is used to start the generation process. 
+First, let’s load our pipeline: + + + Copied +import torch +from transformers import BlipForConditionalGeneration, BlipProcessor +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +captioner_id = "Salesforce/blip-image-captioning-base" +processor = BlipProcessor.from_pretrained(captioner_id) +model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True) + +sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + sd_model_ckpt, + caption_generator=model, + caption_processor=processor, + torch_dtype=torch.float16, + safety_checker=None, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +Then, we load an input image for conditioning and obtain a suitable caption for it: + + + Copied +import requests +from PIL import Image + +img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" +raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +caption = pipeline.generate_caption(raw_image) +Then we employ the generated caption and the input image to get the inverted noise: + + + Copied +generator = torch.manual_seed(0) +inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents +Now, generate the image with edit directions: + + + Copied +# See the "Generating source and target embeddings" section below to +# automate the generation of these captions with a pre-trained model like Flan-T5 as explained below. +source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] +target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +source_embeds = pipeline.get_embeds(source_prompts, batch_size=2) +target_embeds = pipeline.get_embeds(target_prompts, batch_size=2) + + +image = pipeline( + caption, + source_embeds=source_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, + generator=generator, + latents=inv_latents, + negative_prompt=caption, +).images[0] +image.save("edited_image.png") + +Generating source and target embeddings + +The authors originally used the GPT-3 API to generate the source and target captions for discovering +edit directions. However, we can also leverage open source and public models for the same purpose. +Below, we provide an end-to-end example with the Flan-T5 model +for generating captions and CLIP for +computing embeddings on the generated captions. +1. Load the generation model: + + + Copied +import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) +2. Construct a starting prompt: + + + Copied +source_concept = "cat" +target_concept = "dog" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." +Here, we’re interested in the “cat -> dog” direction. +3. 
Generate captions: +We can use a utility function like the following for this purpose. + + + Copied +def generate_captions(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) +And then we just call it to generate our captions: + + + Copied +source_captions = generate_captions(source_text) +target_captions = generate_captions(target_text) +We encourage you to play around with the different parameters supported by the +generate() method (documentation) for the generation quality you are looking for. +4. Load the embedding model: +Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model. + + + Copied +from diffusers import StableDiffusionPix2PixZeroPipeline + +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +) +pipeline = pipeline.to("cuda") +tokenizer = pipeline.tokenizer +text_encoder = pipeline.text_encoder +5. Compute embeddings: + + + Copied +import torch + +def embed_captions(sentences, tokenizer, text_encoder, device="cuda"): + with torch.no_grad(): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeddings = embed_captions(source_captions, tokenizer, text_encoder) +target_embeddings = embed_captions(target_captions, tokenizer, text_encoder) +And you’re done! Here is a Colab Notebook that you can use to interact with the entire process. +Now, you can use these embeddings directly while calling the pipeline: + + + Copied +from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + +images = pipeline( + prompt, + source_embeds=source_embeddings, + target_embeds=target_embeddings, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images +images[0].save("edited_image_dog.png") + +StableDiffusionPix2PixZeroPipeline + + +class diffusers.StableDiffusionPix2PixZeroPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] +feature_extractor: CLIPImageProcessor +safety_checker: StableDiffusionSafetyChecker +inverse_scheduler: DDIMInverseScheduler +caption_generator: BlipForConditionalGeneration +caption_processor: BlipProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
+ + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + +requires_safety_checker (bool) — +Whether the pipeline requires a safety checker. We recommend setting it to True if you’re using the +pipeline publicly. + + + +Pipeline for pixel-levl image editing using Pix2Pix Zero. Based on Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, NoneType] = None +source_embeds: Tensor = None +target_embeds: Tensor = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +cross_attention_guidance_amount: float = 0.1 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +source_embeds (torch.Tensor) — +Source concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. + + +target_embeds (torch.Tensor) — +Target concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch + +>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +>>> def download(embedding_url, local_filepath): +... r = requests.get(embedding_url) +... with open(local_filepath, "wb") as f: +... 
f.write(r.content) + + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.to("cuda") + +>>> prompt = "a high resolution painting of a cat in the style of van gough" +>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt" +>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt" + +>>> for url in [source_emb_url, target_emb_url]: +... download(url, url.split("/")[-1]) + +>>> src_embeds = torch.load(source_emb_url.split("/")[-1]) +>>> target_embeds = torch.load(target_emb_url.split("/")[-1]) +>>> images = pipeline( +... prompt, +... source_embeds=src_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... ).images + +>>> images[0].save("edited_image_dog.png") + +construct_direction + +< +source +> +( +embs_source: Tensor +embs_target: Tensor + +) + + + +Constructs the edit direction to steer the image generation process semantically. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +generate_caption + +< +source +> +( +images + +) + + + +Generates caption for a given image. + +invert + +< +source +> +( +prompt: typing.Optional[str] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +num_inference_steps: int = 50 +guidance_scale: float = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +cross_attention_guidance_amount: float = 0.1 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +lambda_auto_corr: float = 20.0 +lambda_kl: float = 20.0 +num_reg_steps: int = 5 +num_auto_corr_rolls: int = 5 + +) + + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image, optional) — +Image, or tensor representing an image batch which will be used for conditioning. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 1) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction + + +lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback–Leibler divergence output + + +num_reg_steps (int, optional, defaults to 5) — +Number of regularization loss steps + + +num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps + + + +Function used to generate inverted latents given a prompt and image. + +Examples: + + + Copied +>>> import torch +>>> from transformers import BlipForConditionalGeneration, BlipProcessor +>>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +>>> import requests +>>> from PIL import Image + +>>> captioner_id = "Salesforce/blip-image-captioning-base" +>>> processor = BlipProcessor.from_pretrained(captioner_id) +>>> model = BlipForConditionalGeneration.from_pretrained( +... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True +... ) + +>>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( +... sd_model_ckpt, +... caption_generator=model, +... caption_processor=processor, +... torch_dtype=torch.float16, +... safety_checker=None, +... 
) + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" + +>>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +>>> # generate caption +>>> caption = pipeline.generate_caption(raw_image) + +>>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii" +>>> inv_latents = pipeline.invert(caption, image=raw_image).latents +>>> # we need to generate source and target embeds + +>>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] + +>>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +>>> source_embeds = pipeline.get_embeds(source_prompts) +>>> target_embeds = pipeline.get_embeds(target_prompts) +>>> # the latents can then be used to edit a real image +>>> # when using Stable Diffusion 2 or other models that use v-prediction +>>> # set `cross_attention_guidance_amount` to 0.01 or less to avoid input latent gradient explosion + +>>> image = pipeline( +... caption, +... source_embeds=source_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... generator=generator, +... latents=inv_latents, +... negative_prompt=caption, +... ).images[0] +>>> image.save("edited_image.png") diff --git a/scrapped_outputs/98d5c107eb99dd3eb09b2f95a9e1f5a9.txt b/scrapped_outputs/98d5c107eb99dd3eb09b2f95a9e1f5a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/98d5c107eb99dd3eb09b2f95a9e1f5a9.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. 
We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. 
Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as the callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
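As a minimal sketch of how the call arguments and the offloading methods documented above fit together, the snippet below loads the combined pipeline, enables model CPU offloading, and attaches a step-end callback; the callback body is purely illustrative (it only logs progress) and is an assumption for this example, not part of the API reference:

Copied >>> import torch
>>> from diffusers import WuerstchenCombinedPipeline

>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16)
>>> # keep peak GPU memory low by moving one whole sub-model to the GPU only while it runs
>>> pipe.enable_model_cpu_offload()


>>> def log_step(pipeline, step, timestep, callback_kwargs):
...     # called once per decoder denoising step; must return the (possibly modified) callback_kwargs
...     print(f"decoder step {step}, timestep {timestep}")
...     return callback_kwargs


>>> images = pipe(
...     prompt="an image of a shiba inu, donning a spacesuit and helmet",
...     callback_on_step_end=log_step,
... ).images

The same pattern works for the prior stage by passing the callable as prior_callback_on_step_end instead.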
WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with decoder to denoise the encoded image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24 * 10.67)=256 and +width=int(24 * 10.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... 
).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrain("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/98dc554e9a8fcccbe8af82ac0f0817e5.txt b/scrapped_outputs/98dc554e9a8fcccbe8af82ac0f0817e5.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/98dc554e9a8fcccbe8af82ac0f0817e5.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/98df94a6db10465594ee0d53870b09b3.txt b/scrapped_outputs/98df94a6db10465594ee0d53870b09b3.txt new file mode 100644 index 0000000000000000000000000000000000000000..666a3fca5765b9f54e337bb3359e45c9d5018c27 --- /dev/null +++ b/scrapped_outputs/98df94a6db10465594ee0d53870b09b3.txt @@ -0,0 +1,70 @@ +TCDScheduler Trajectory Consistency Distillation by Jianbin Zheng, Minghui Hu, Zhongyi Fan, Chaoyue Wang, Changxing Ding, Dacheng Tao and Tat-Jen Cham introduced a Strategic Stochastic Sampling (Algorithm 4) that is capable of generating good samples in a small number of steps. Distinguishing it as an advanced iteration of the multistep scheduler (Algorithm 1) in the Consistency Models, Strategic Stochastic Sampling specifically tailored for the trajectory consistency function. The abstract from the paper is: Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy. To address this limitation, we initially delve into and elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses trajectory consistency function and strategic stochastic sampling. 
The trajectory consistency function diminishes the distillation errors by broadening the scope of the self-consistency boundary condition and endowing the TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE. Additionally, strategic stochastic sampling is specifically designed to circumvent the accumulated errors inherent in multi-step consistency sampling, which is meticulously tailored to complement the TCD model. Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs. The original codebase can be found at jabir-zheng/TCD. TCDScheduler class diffusers.TCDScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. 
timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. TCDScheduler incorporates the Strategic Stochastic Sampling introduced by the paper Trajectory Consistency Distillation, +extending the original Multistep Consistency Sampling to enable unrestricted trajectory traversal. This code is based on the official repo of TCD(https://github.com/jabir-zheng/TCD). This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
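As a hedged usage sketch, the scheduler can be swapped into an existing pipeline with from_config(); the checkpoint, step count, and eta below are illustrative assumptions only, and in practice TCDScheduler is paired with a TCD-distilled model or TCD LoRA rather than plain base weights:

Copied import torch
from diffusers import StableDiffusionPipeline, TCDScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# replace the default scheduler with TCDScheduler (Strategic Stochastic Sampling)
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=4,  # few-step sampling; assumes TCD-distilled weights or a TCD LoRA are loaded
    guidance_scale=0.0,
    eta=0.3,  # forwarded to TCDScheduler.step() to control the per-step stochasticity (gamma in the paper)
).images[0]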
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.3 generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.TCDSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +A stochastic parameter (referred to as gamma in the paper) used to control the stochasticity in every step. +When eta = 0, it represents deterministic sampling, whereas eta = 1 indicates full stochastic sampling. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a TCDSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.TCDSchedulerOutput or tuple + +If return_dict is True, TCDSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). TCDSchedulerOutput class diffusers.schedulers.scheduling_tcd.TCDSchedulerOutput < source > ( prev_sample: FloatTensor pred_noised_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_noised_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted noised sample (x_{s}) based on the model output from the current timestep. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/98eb6923e0b72e85aa104d289daeac53.txt b/scrapped_outputs/98eb6923e0b72e85aa104d289daeac53.txt new file mode 100644 index 0000000000000000000000000000000000000000..848931d1969089ae8a8d21d431c071f2b1f6f901 --- /dev/null +++ b/scrapped_outputs/98eb6923e0b72e85aa104d289daeac53.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. 
Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, default to True) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without loosing too much precision in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). wrapper < source > ( *args **kwargs ) wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. 
Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/9923e63739717ae0ea27bcceb49912ae.txt b/scrapped_outputs/9923e63739717ae0ea27bcceb49912ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/99419a462a4de46d3dd3757a51263b53.txt b/scrapped_outputs/99419a462a4de46d3dd3757a51263b53.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ddef7d2587e0ab05a500a167a90610ae978a96c --- /dev/null +++ b/scrapped_outputs/99419a462a4de46d3dd3757a51263b53.txt @@ -0,0 +1,107 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... 
).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
diff --git a/scrapped_outputs/9972ecf2149a3da5c5d780027c370abb.txt b/scrapped_outputs/9972ecf2149a3da5c5d780027c370abb.txt new file mode 100644 index 0000000000000000000000000000000000000000..44fefb3c47f353a4d2bfd9d051ae6e0b396bc8d5 --- /dev/null +++ b/scrapped_outputs/9972ecf2149a3da5c5d780027c370abb.txt @@ -0,0 +1,4 @@ +Using Diffusers for audio + +DanceDiffusionPipeline and AudioDiffusionPipeline can be used to generate +audio rapidly! More coming soon! diff --git a/scrapped_outputs/997902817832fe06bf646b90225d3a1f.txt b/scrapped_outputs/997902817832fe06bf646b90225d3a1f.txt new file mode 100644 index 0000000000000000000000000000000000000000..95d0b3baa7c66bb81020debd1d01d9bc9a2c8a95 --- /dev/null +++ b/scrapped_outputs/997902817832fe06bf646b90225d3a1f.txt @@ -0,0 +1,398 @@ +Memory and speed + +We present some techniques and ideas to optimize 🤗 Diffusers inference for memory or speed. As a general rule, we recommend the use of xFormers for memory efficient attention, please see the recommended installation instructions. +We’ll discuss how the following settings impact performance and memory. + +Latency +Speedup +original +9.50s +x1 +fp16 +3.61s +x2.63 +channels last +3.30s +x2.88 +traced UNet +3.21s +x2.96 +memory efficient attention +2.63s +x3.61 +obtained on NVIDIA TITAN RTX by generating a single image of size 512x512 from + the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM + steps. + + +Use tf32 instead of fp32 (on Ampere and later CUDA devices) + +On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too. It can significantly speed up computations with typically negligible loss of numerical accuracy. You can read more about it here. All you need to do is to add this before your inference: + + + Copied +import torch + +torch.backends.cuda.matmul.allow_tf32 = True + +Half precision weights + +To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named fp16, and telling PyTorch to use the float16 type when loading them: + + + Copied +import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +It is strongly discouraged to make use of [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure + float16 precision. + + +Sliced attention for additional memory savings + +For even additional memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once. +Attention slicing is useful even if a batch size of just 1 is used - as long + as the model uses more than one attention head. If there is more than one + attention head the *QK^T* attention matrix can be computed sequentially for + each head which can save a significant amount of memory. 
+ +To perform the attention computation sequentially over each head, you only need to invoke enable_attention_slicing() in your pipeline before inference, like here: + + + Copied +import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_attention_slicing() +image = pipe(prompt).images[0] +There’s a small performance penalty of about 10% slower inference times, but this method allows you to use Stable Diffusion in as little as 3.2 GB of VRAM! + +Sliced VAE decode for larger batches + +To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE decode that decodes the batch latents one image at a time. +You likely want to couple this with enable_attention_slicing() or enable_xformers_memory_efficient_attention() to further minimize memory use. +To perform the VAE decode one image at a time, invoke enable_vae_slicing() in your pipeline before inference. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +images = pipe([prompt] * 32).images +You may see a small performance boost in VAE decode on multi-image batches. There should be no performance impact on single-image batches. + +Tiled VAE decode and encode for large images + +Tiled VAE processing makes it possible to work with large images on limited VRAM. For example, generating 4k images in 8GB of VRAM. Tiled VAE decoder splits the image into overlapping tiles, decodes the tiles, and blends the outputs to make the final image. +You want to couple this with enable_attention_slicing() or enable_xformers_memory_efficient_attention() to further minimize memory use. +To use tiled VAE processing, invoke enable_vae_tiling() in your pipeline before inference. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] +The output image will have some tile-to-tile tone variation from the tiles having separate decoders, but you shouldn’t see sharp seams between the tiles. The tiling is turned off for images that are 512x512 or smaller. + + +Offloading to CPU with accelerate for memory savings + +For additional memory savings, you can offload the weights to CPU and only load them to GPU when performing the forward pass. 
+To perform CPU offloading, all you have to do is invoke enable_sequential_cpu_offload(): + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] +And you can get the memory consumption to < 3GB. +Note that this method works at the submodule level, not on whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different submodules of the UNet are sequentially onloaded and then offloaded as they are needed, so the number of memory transfers is large. +Consider using model offloading as another point in the optimization space: it will be much faster, but memory savings won't be as large. + +It is also possible to chain offloading with attention slicing for minimal memory consumption (< 2GB). + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +pipe.enable_attention_slicing(1) + +image = pipe(prompt).images[0] +Note: When using enable_sequential_cpu_offload(), it is important to not move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal. See this issue for more information. + + +Model offloading for fast inference and memory savings + +Sequential CPU offloading, as discussed in the previous section, preserves a lot of memory but makes inference slower, because submodules are moved to GPU as needed, and immediately returned to CPU when a new module runs. +Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent modules. This results in a negligible impact on inference time (compared with moving the pipeline to cuda), while still providing some memory savings. +In this scenario, only one of the main components of the pipeline (typically: text encoder, unet and vae) +will be in the GPU while the others wait in the CPU. Components like the UNet that run for multiple iterations will stay on GPU until they are no longer needed. +This feature can be enabled by invoking enable_model_cpu_offload() on the pipeline, as shown below. + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] +This is also compatible with attention slicing for additional memory savings. + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +pipe.enable_attention_slicing(1) + +image = pipe(prompt).images[0] +This feature requires `accelerate` version 0.17.0 or larger. 
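To see the speed and memory tradeoff between the two offloading strategies on your own hardware, a rough sketch such as the one below can be used (torch.cuda.max_memory_allocated only tracks PyTorch allocations, and for a clean comparison each strategy is best measured in a fresh process):

import torch
from diffusers import StableDiffusionPipeline

def peak_memory_gb(enable_offload):
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    enable_offload(pipe)  # apply one of the offloading strategies
    torch.cuda.reset_peak_memory_stats()
    pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20)
    return torch.cuda.max_memory_allocated() / 1024**3

print("sequential offload:", peak_memory_gb(lambda p: p.enable_sequential_cpu_offload()))
print("model offload:     ", peak_memory_gb(lambda p: p.enable_model_cpu_offload()))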
+ + +Using Channels Last memory format + +Channels last memory format is an alternative way of ordering NCHW tensors in memory preserving dimensions ordering. Channels last tensors ordered in such a way that channels become the densest dimension (aka storing images pixel-per-pixel). Since not all operators currently support channels last format it may result in a worst performance, so it’s better to try it and see if it works for your model. +For example, in order to set the UNet model in our pipeline to use channels last format, we can use the following: + + + Copied +print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works + +Tracing + +Tracing runs an example input tensor through your model, and captures the operations that are invoked as that input makes its way through the model’s layers so that an executable or ScriptFunction is returned that will be optimized using just-in-time compilation. +To trace our UNet model, we can use the following: + + + Copied +import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn(2, 4, 64, 64).half().cuda() + timestep = torch.rand(1).half().cuda() * 999 + encoder_hidden_states = torch.randn(2, 77, 768).half().cuda() + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") +Then we can replace the unet attribute of the pipeline with the traced model like the following + + + Copied +from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del 
pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] + +Memory Efficient Attention + +Recent work on optimizing the bandwitdh in the attention block has generated huge speed ups and gains in GPU memory usage. The most recent being Flash Attention from @tridao: code, paper. +Here are the speedups we obtain on a few Nvidia GPUs when running the inference at 512x512 with a batch size of 1 (one prompt): +GPU +Base Attention FP16 +Memory Efficient Attention FP16 +NVIDIA Tesla T4 +3.5it/s +5.5it/s +NVIDIA 3060 RTX +4.6it/s +7.8it/s +NVIDIA A10G +8.88it/s +15.6it/s +NVIDIA RTX A6000 +11.7it/s +21.09it/s +NVIDIA TITAN RTX +12.51it/s +18.22it/s +A100-SXM4-40GB +18.6it/s +29.it/s +A100-SXM-80GB +18.7it/s +29.5it/s +To leverage it just make sure you have: +PyTorch > 1.12 +Cuda available +Installed the xformers library. + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() diff --git a/scrapped_outputs/99a79cbb9bea80c895842c07f23a9702.txt b/scrapped_outputs/99a79cbb9bea80c895842c07f23a9702.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/99a79cbb9bea80c895842c07f23a9702.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. 
For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. 
For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/99b4a695231293e33b37cbd7b579c35e.txt b/scrapped_outputs/99b4a695231293e33b37cbd7b579c35e.txt new file mode 100644 index 0000000000000000000000000000000000000000..bd08be310ae298544c596d050027c90654a96e42 --- /dev/null +++ b/scrapped_outputs/99b4a695231293e33b37cbd7b579c35e.txt @@ -0,0 +1,308 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. 
We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). Text-to-Image Generation + +Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open source DALL-E 2 replication [Karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha): + + Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
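As mentioned above, the scheduler can be swapped to trade speed for quality. A minimal sketch, reusing the pipe and init_image from the image-to-image example (DPMSolverMultistepScheduler is just one compatible choice, and 25 steps is an illustrative value):

from diffusers import DPMSolverMultistepScheduler

# replace the default scheduler; multistep solvers usually need fewer denoising steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

images = pipe(init_image, prompt="A fantasy landscape, trending on artstation", num_inference_steps=25).images
images[0].save("variation_image_dpm.png")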
StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. 
image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
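To tie the reference together, here is a short sketch of how the output class above is consumed in practice, reusing the pipe and init_image from the image-to-image example earlier (noise_level=100 and the file name are only illustrative):

output = pipe(init_image, prompt="A fantasy landscape, trending on artstation", noise_level=100)
print(type(output).__name__)  # ImagePipelineOutput
output.images[0].save("unclip_variation.png")

# with return_dict=False a plain tuple is returned; its first element is the list of images
images = pipe(init_image, return_dict=False)[0]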
diff --git a/scrapped_outputs/99b85800259ede6f4e34e2ec044eab76.txt b/scrapped_outputs/99b85800259ede6f4e34e2ec044eab76.txt new file mode 100644 index 0000000000000000000000000000000000000000..3beb02ec7305478ce7a4b87fda4119de8d27d39d --- /dev/null +++ b/scrapped_outputs/99b85800259ede6f4e34e2ec044eab76.txt @@ -0,0 +1,176 @@ +VQDiffusion + + +Overview + +Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo +The abstract of the paper is the following: +We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_vq_diffusion.py +Text-to-Image Generation +- + +VQDiffusionPipeline + + +class diffusers.VQDiffusionPipeline + +< +source +> +( +vqvae: VQModel +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +transformer: Transformer2DModel +scheduler: VQDiffusionScheduler +learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings + +) + + +Parameters + +vqvae (VQModel) — +Vector Quantized Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent +representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. VQ Diffusion uses the text portion of +CLIP, specifically +the clip-vit-base-patch32 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +transformer (Transformer2DModel) — +Conditional transformer to denoise the encoded image latents. + + +scheduler (VQDiffusionScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. + + + +Pipeline for text-to-image generation using VQ Diffusion +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
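Before the full __call__ reference below, a minimal usage sketch (the checkpoint name microsoft/vq-diffusion-ithq and the truncation_rate value are assumptions for illustration; any checkpoint saved in this pipeline's format should work):

import torch
from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("teddy bear playing in the pool", num_inference_steps=100, truncation_rate=0.86).images[0]
image.save("teddy_bear.png")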
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +num_inference_steps: int = 100 +guidance_scale: float = 5.0 +truncation_rate: float = 1.0 +num_images_per_prompt: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — +Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at +most truncation_rate. The lowest probabilities that would increase the cumulative probability above +truncation_rate are set to zero. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor of shape (batch), optional) — +Pre-generated noisy latents to be used as inputs for image generation. Must be valid embedding indices. +Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will +be generated of completely masked latent pixels. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict +is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +truncate + +< +source +> +( +log_p_x_0: FloatTensor +truncation_rate: float + +) + + + +Truncates log_p_x_0 such that for each column vector, the total cumulative probability is truncation_rate The +lowest probabilities that would increase the cumulative probability above truncation_rate are set to zero. 
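To make truncation_rate and guidance_scale concrete, the following is a minimal text-to-image sketch. The checkpoint id microsoft/vq-diffusion-ithq is assumed here for illustration; substitute whichever VQ Diffusion weights you actually use.

>>> import torch
>>> from diffusers import VQDiffusionPipeline

>>> pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # truncation_rate < 1.0 zeroes out the lowest-probability latent classes at every step.
>>> image = pipe(
...     "a teddy bear playing in the pool",
...     num_inference_steps=100,
...     guidance_scale=5.0,
...     truncation_rate=0.86,
...     generator=torch.manual_seed(0),
... ).images[0]
>>> image.save("teddy_bear.png")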
diff --git a/scrapped_outputs/99fc4e048de4323b6bc42634e0d24bad.txt b/scrapped_outputs/99fc4e048de4323b6bc42634e0d24bad.txt new file mode 100644 index 0000000000000000000000000000000000000000..499d88bc0e5374d11fae3590eafbb42edb1b8fe8 --- /dev/null +++ b/scrapped_outputs/99fc4e048de4323b6bc42634e0d24bad.txt @@ -0,0 +1,3027 @@ +Models + +Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models. +The primary function of these models is to denoise an input sample, by modeling the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). +The models are built on the base class [‘ModelMixin’] that is a torch.nn.module with basic functionality for saving and loading models both locally and from the HuggingFace hub. + +ModelMixin + + +class diffusers.ModelMixin + +< +source +> +( +) + + + +Base class for all models. +ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading +and saving models. +config_name (str) — A filename under which the model should be stored when calling +save_pretrained(). + +disable_gradient_checkpointing + +< +source +> +( +) + + + +Deactivates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_gradient_checkpointing + +< +source +> +( +) + + + +Activates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. 
+ + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +max_memory (Dict, optional) — +A dictionary device identifier to maximum memory. Will default to the maximum memory available for each +GPU and the available CPU RAM if unset. + + +offload_folder (str or os.PathLike, optional) — +If the device_map contains any value "disk", the folder where we will offload weights. + + +offload_state_dict (bool, optional) — +If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU +RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to +True when there is some disk offload. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. 
This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + +use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights will be downloaded if they’re available and if the +safetensors library is installed. If set to True, the model will be forcibly loaded from +safetensors weights. If set to False, loading will not use safetensors. + + + +Instantiate a pretrained pytorch model from a pre-trained model configuration. +The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train +the model, you should first set it back in training mode with model.train(). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +num_parameters + +< +source +> +( +only_trainable: bool = False +exclude_embeddings: bool = False + +) +→ +int + +Parameters + +only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters + + +exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embeddings parameters + + +Returns + +int + + + +The number of parameters. + + +Get number of (optionally, trainable or non-embeddings) parameters in the module. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +save_function: typing.Callable = None +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/main/en/api/models#diffusers.ModelMixin.from_pretrained) class method. 
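The loading and saving methods above compose: a model can be pulled from a subfolder of a hub repository, inspected, and written back to disk as a safetensors variant that from_pretrained can load again. A short sketch, reusing the stabilityai/stable-diffusion-2-1 repository from the xFormers example above:

>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> unet = UNet2DConditionModel.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16
... )
>>> unet.num_parameters(only_trainable=False)  # total parameter count of the UNet

>>> # Save as an fp16 safetensors variant, then reload it from the local directory.
>>> unet.save_pretrained("./sd21-unet", safe_serialization=True, variant="fp16")
>>> unet = UNet2DConditionModel.from_pretrained("./sd21-unet", variant="fp16", torch_dtype=torch.float16)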
+ +UNet2DOutput + + +class diffusers.models.unet_2d.UNet2DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states output. Output of last layer of model. + + + + +UNet2DModel + + +class diffusers.UNet2DModel + +< +source +> +( +sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None +in_channels: int = 3 +out_channels: int = 3 +center_input_sample: bool = False +time_embedding_type: str = 'positional' +freq_shift: int = 0 +flip_sin_to_cos: bool = True +down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') +block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) +layers_per_block: int = 2 +mid_block_scale_factor: float = 1 +downsample_padding: int = 1 +act_fn: str = 'silu' +attention_head_dim: typing.Optional[int] = 8 +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +resnet_time_scale_shift: str = 'default' +add_attention: bool = True +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). + + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. + + +freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:True): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")): Tuple of downsample block +types. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(224, 448, 672, 896)): Tuple of block output channels. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. + + +downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +attention_head_dim (int, optional, defaults to 8) — The attention head dimension. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. 
+ + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + + +UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +class_labels: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. + + +Returns + +UNet2DOutput or tuple + + + +UNet2DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet1DOutput + + +class diffusers.models.unet_1d.UNet1DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +Hidden states output. Output of last layer of model. + + + + +UNet1DModel + + +class diffusers.UNet1DModel + +< +source +> +( +sample_size: int = 65536 +sample_rate: typing.Optional[int] = None +in_channels: int = 2 +out_channels: int = 2 +extra_in_channels: int = 0 +time_embedding_type: str = 'fourier' +flip_sin_to_cos: bool = True +use_timestep_embedding: bool = False +freq_shift: float = 0.0 +down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') +mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' +out_block_type: str = None +block_out_channels: typing.Tuple[int] = (32, 32, 64) +act_fn: str = None +norm_num_groups: int = 8 +layers_per_block: int = 1 +downsample_each_block: bool = False + +) + + +Parameters + +sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. + + +in_channels (int, optional, defaults to 2) — Number of channels in the input sample. + + +out_channels (int, optional, defaults to 2) — Number of channels in the output. + + +extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model is initially designed for. + + +time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. + + +freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:False): Whether to flip sin to cos for fourier time embedding. 
+ + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(32, 32, 64)): Tuple of block output channels. + + +mid_block_type (str, optional, defaults to “UNetMidBlock1D”) — block type for middle of UNet. + + +out_block_type (str, optional, defaults to None) — optional output processing of UNet. + + +act_fn (str, optional, defaults to None) — optional activation function in UNet blocks. + + +norm_num_groups (int, optional, defaults to 8) — group norm member count in UNet blocks. + + +layers_per_block (int, optional, defaults to 1) — added number of layers in a UNet block. + + +downsample_each_block (int, optional, defaults to False — +experimental feature for using a UNet without upsampling. + + + +UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet1DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch_size, num_channels, sample_size) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. + + +Returns + +UNet1DOutput or tuple + + + +UNet1DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet2DConditionOutput + + +class diffusers.models.unet_2d_condition.UNet2DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. 
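The UNet2DModel.forward signature documented above boils down to a single denoising call: a noisy sample and a timestep go in, and a UNet2DOutput whose sample has the same shape comes out. A shape-only sketch with a deliberately tiny, untrained configuration (all values chosen for illustration):

>>> import torch
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel(
...     sample_size=32,
...     in_channels=3,
...     out_channels=3,
...     down_block_types=("DownBlock2D", "AttnDownBlock2D"),
...     up_block_types=("AttnUpBlock2D", "UpBlock2D"),
...     block_out_channels=(32, 64),
...     layers_per_block=1,
... )
>>> noisy_sample = torch.randn(1, 3, 32, 32)
>>> out = model(noisy_sample, timestep=10)
>>> out.sample.shape  # torch.Size([1, 3, 32, 32]), same as the input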
+ + + + +UNet2DConditionModel + + +class diffusers.UNet2DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +center_input_sample: bool = False +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: typing.Union[int, typing.Tuple[int]] = 1280 +encoder_hid_dim: typing.Optional[int] = None +encoder_hid_dim_type: typing.Optional[str] = None +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +dual_cross_attention: bool = False +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +addition_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +resnet_skip_time_act: bool = False +resnet_out_scale_factor: int = 1.0 +time_embedding_type: str = 'positional' +time_embedding_dim: typing.Optional[int] = None +time_embedding_act_fn: typing.Optional[str] = None +timestep_post_act: typing.Optional[str] = None +time_cond_proj_dim: typing.Optional[int] = None +conv_in_kernel: int = 3 +conv_out_kernel: int = 3 +projection_class_embeddings_input_dim: typing.Optional[int] = None +class_embeddings_concat: bool = False +mid_block_only_cross_attention: typing.Optional[bool] = None +cross_attention_norm: typing.Optional[str] = None +addition_embed_type_num_heads = 64 + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +The mid block type. Choose from UNetMidBlock2DCrossAttn or UNetMidBlock2DSimpleCrossAttn, will skip the +mid block layer if None. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. 
+ + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. + + +encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. + + +encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings will be down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". + + +addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + +time_embedding_type (str, optional, default to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. + + +time_embedding_dim (int, optional, default to None) — +An optional override for the dimension of the projected time embedding. + + +time_embedding_act_fn (str, optional, default to None) — +Optional activation function to use on the time embeddings only one time before they as passed to the rest +of the unet. Choose from silu, mish, gelu, and swish. + + +timestep_post_act (str, *optional*, default to None) -- The second activation function to use in timestep embedding. Choose from silu, mishandgelu`. + + +time_cond_proj_dim (int, optional, default to None) — +The dimension of cond_proj layer in timestep embedding. + + +conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. + + +conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. + + +projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +using the “projection” class_embed_type. 
Required when using the “projection” class_embed_type. + + +class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. + + +mid_block_only_cross_attention (bool, optional, defaults to None) — +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value will be used as the value for mid_block_only_cross_attention. Else, it will +default to False. + + + +UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +added_cond_kwargs: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +encoder_attention_mask: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +encoder_attention_mask (torch.Tensor) — +(batch, sequence_length) cross-attention mask, applied to encoder_hidden_states. True = keep, False = +discard. Mask will be converted into a bias, which adds large negative values to attention scores +corresponding to “discard” tokens. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +added_cond_kwargs (dict, optional) — +A kwargs dictionary that if specified includes additonal conditions that can be used for additonal time +embeddings or encoder hidden states projections. See the configurations encoder_hid_dim_type and +addition_embed_type for more information. + + +Returns + +UNet2DConditionOutput or tuple + + + +UNet2DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. 
+ + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +UNet3DConditionOutput + + +class diffusers.models.unet_3d_condition.UNet3DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. 
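Compared to the unconditional 2D UNet, the forward pass documented above additionally requires encoder_hidden_states of shape (batch, sequence_length, cross_attention_dim). A shape-only sketch with a small, untrained configuration, also showing the sliced-attention toggle described above (all configuration values are illustrative):

>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> model = UNet2DConditionModel(
...     sample_size=32,
...     in_channels=4,
...     out_channels=4,
...     down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
...     up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
...     block_out_channels=(64, 128),
...     cross_attention_dim=256,
...     layers_per_block=1,
... )
>>> sample = torch.randn(1, 4, 32, 32)
>>> text_states = torch.randn(1, 77, 256)  # (batch, sequence_length, cross_attention_dim)
>>> out = model(sample, timestep=10, encoder_hidden_states=text_states)
>>> out.sample.shape  # torch.Size([1, 4, 32, 32])

>>> # Compute attention in slices to lower peak memory at a small speed cost.
>>> model.set_attention_slice("auto")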
+ + + + +UNet3DConditionModel + + +class diffusers.UNet3DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') +up_block_types: typing.Tuple[str] = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1024 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 64 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +num_attention_heads (int, optional) — The number of attention heads. + + + +UNet3DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) 
+ +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +~models.unet_2d_condition.UNet3DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, num_frames, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet3DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~models.unet_2d_condition.UNet3DConditionOutput or tuple + + + +~models.unet_2d_condition.UNet3DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +DecoderOutput + + +class diffusers.models.vae.DecoderOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + + +Output of decoding method. + +VQEncoderOutput + + +class diffusers.models.vq_model.VQEncoderOutput + +< +source +> +( +latents: FloatTensor + +) + + +Parameters + +latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Encoded output sample of the model. Output of the last layer of the model. + + + +Output of VQModel encoding method. 
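set_attn_processor, documented above for both the 2D and 3D conditional UNets (and further below for AutoencoderKL), accepts either a single processor instance, applied to every Attention layer, or a dictionary keyed by module path. A brief sketch using the 2D UNet from the stabilityai/stable-diffusion-2-1 repository; the attn_processors inspection step is an assumption about the model's attribute layout rather than something documented here:

>>> from diffusers import UNet2DConditionModel
>>> from diffusers.models.attention_processor import AttnProcessor2_0

>>> unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2-1", subfolder="unet")

>>> # A single instance is broadcast to every Attention layer.
>>> unet.set_attn_processor(AttnProcessor2_0())

>>> # Assumed attribute: mapping from module paths to the processors currently in use.
>>> list(unet.attn_processors)[:2]

>>> # Revert to the library's default attention implementation.
>>> unet.set_default_attn_processor()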
+ +VQModel + + +class diffusers.VQModel + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 3 +sample_size: int = 32 +num_vq_embeddings: int = 256 +norm_num_groups: int = 32 +vq_embed_dim: typing.Optional[int] = None +scaling_factor: float = 0.18215 +norm_type: str = 'group' + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. + + +vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray +Kavukcuoglu. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +return_dict: bool = True + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + + +AutoencoderKLOutput + + +class diffusers.models.autoencoder_kl.AutoencoderKLOutput + +< +source +> +( +latent_dist: DiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. 
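A full VQModel.forward call, as documented above, encodes the input, snaps the latents to their nearest codebook vectors, and decodes them again, so the output sample matches the input shape. A shape-only sketch with the default, untrained configuration:

>>> import torch
>>> from diffusers import VQModel

>>> model = VQModel()  # default configuration, untrained weights
>>> images = torch.randn(1, 3, 32, 32)
>>> reconstruction = model(images).sample  # encode -> quantize -> decode
>>> reconstruction.shape  # torch.Size([1, 3, 32, 32])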
+ +AutoencoderKL + + +class diffusers.AutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma +and Max Welling. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +disable_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_slicing was previously invoked, this method will go back to computing +decoding in one step. + +disable_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. + +enable_tiling + +< +source +> +( +use_tiling: bool = True + +) + + + +Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow +the processing of larger images. + +forward + +< +source +> +( +sample: FloatTensor +sample_posterior: bool = False +return_dict: bool = True +generator: typing.Optional[torch._C.Generator] = None + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
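The forward call above runs encode, optional posterior sampling, and decode in one pass, and it can be combined with the slicing and tiling toggles to keep memory bounded. A shape-only sketch with the default, untrained configuration; for inputs smaller than the tile size, tiled decoding is expected to fall back to the regular path:

>>> import torch
>>> from diffusers import AutoencoderKL

>>> vae = AutoencoderKL()   # default configuration, untrained weights
>>> vae.enable_slicing()    # decode the batch one sample at a time
>>> vae.enable_tiling()     # tile very large inputs during encode/decode

>>> images = torch.randn(2, 3, 64, 64)
>>> out = vae(images, sample_posterior=True, generator=torch.manual_seed(0))
>>> out.sample.shape  # torch.Size([2, 3, 64, 64])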
+ + + + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +tiled_decode + +< +source +> +( +z: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled decoding is — + + +different from non-tiled decoding due to each tile using a different decoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +z (torch.FloatTensor): Input batch of latent vectors. return_dict (bool, optional, defaults to +True): +Whether or not to return a DecoderOutput instead of a plain tuple. + + + +Decode a batch of images using a tiled decoder. + +tiled_encode + +< +source +> +( +x: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. 
The end result of tiled encoding is — + + +different from non-tiled encoding due to each tile using a different encoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +x (torch.FloatTensor): Input batch of images. return_dict (bool, optional, defaults to True): +Whether or not to return a AutoencoderKLOutput instead of a plain tuple. + + + +Encode a batch of images using a tiled encoder. + +Transformer2DModel + + +class diffusers.Transformer2DModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +num_vector_embeds: typing.Optional[int] = None +patch_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +num_embeds_ada_norm: typing.Optional[int] = None +use_linear_projection: bool = False +only_cross_attention: bool = False +upcast_attention: bool = False +norm_type: str = 'layer_norm' +norm_elementwise_affine: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +num_vector_embeds (int, optional) — +Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. +Includes the class for the masked latent pixel. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +num_embeds_ada_norm ( int, optional) — Pass if at least one of the norm_layers is AdaLayerNorm. +The number of diffusion steps used during training. Note that this is fixed at training time as it is used +to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for +up to but not more than steps than num_embeds_ada_norm. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + + +Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual +embeddings) inputs. +When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard +transformer action. Finally, reshape to image. +When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional +embeddings applied, see ImagePositionalEmbeddings. Then apply standard transformer action. Finally, predict +classes of unnoised image. 
+Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised +image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. + +forward + +< +source +> +( +hidden_states: Tensor +encoder_hidden_states: typing.Optional[torch.Tensor] = None +timestep: typing.Optional[torch.LongTensor] = None +class_labels: typing.Optional[torch.LongTensor] = None +cross_attention_kwargs: typing.Dict[str, typing.Any] = None +attention_mask: typing.Optional[torch.Tensor] = None +encoder_attention_mask: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +Transformer2DModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continuous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.LongTensor, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +encoder_attention_mask ( torch.Tensor, optional ). — +Cross-attention mask, applied to encoder_hidden_states. Two formats supported: +Mask (batch, sequence_length) True = keep, False = discard. Bias (batch, 1, sequence_length) 0 += keep, -10000 = discard. +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +Transformer2DModelOutput or tuple + + + +Transformer2DModelOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_2d.Transformer2DModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +Hidden states conditioned on encoder_hidden_states input. If discrete, returns probability distributions +for the unnoised latent pixels. + + + + +TransformerTemporalModel + + +class diffusers.models.transformer_temporal.TransformerTemporalModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +norm_elementwise_affine: bool = True +double_self_attention: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. 
+ + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + +double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers + + + +Transformer model for video-like data. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +num_frames = 1 +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +~models.transformer_2d.TransformerTemporalModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +~models.transformer_2d.TransformerTemporalModelOutput or tuple + + + +~models.transformer_2d.TransformerTemporalModelOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_temporal.TransformerTemporalModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. + + + + +PriorTransformer + + +class diffusers.PriorTransformer + +< +source +> +( +num_attention_heads: int = 32 +attention_head_dim: int = 64 +num_layers: int = 20 +embedding_dim: int = 768 +num_embeddings = 77 +additional_embeddings = 4 +dropout: float = 0.0 + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. + + +num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. + + +embedding_dim (int, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP +image embeddings and text embeddings are both the same dimension. 
+ + +num_embeddings (int, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the +length of the prompt after it has been tokenized. + + +additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + + +The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the +transformer predicts the image embeddings through a denoising diffusion process. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +For more details, see the original paper: https://arxiv.org/abs/2204.06125 + +forward + +< +source +> +( +hidden_states +timestep: typing.Union[torch.Tensor, float, int] +proj_embedding: FloatTensor +encoder_hidden_states: FloatTensor +attention_mask: typing.Optional[torch.BoolTensor] = None +return_dict: bool = True + +) +→ +PriorTransformerOutput or tuple + +Parameters + +hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +x_t, the currently predicted image embeddings. + + +timestep (torch.long) — +Current denoising step. + + +proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. + + +encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. + + +attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. + + +Returns + +PriorTransformerOutput or tuple + + + +PriorTransformerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. 
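+To make the shapes above concrete, the following is a minimal sketch of a forward() call on a small,
+randomly initialized prior. The configuration is hypothetical and deliberately tiny to keep the example
+light; it is not the configuration of any released checkpoint.
+
+
+ Copied
+>>> import torch
+>>> from diffusers import PriorTransformer
+
+>>> # Hypothetical small configuration, for illustration only.
+>>> prior = PriorTransformer(num_attention_heads=2, attention_head_dim=32, num_layers=2, embedding_dim=768)
+
+>>> batch_size = 1
+>>> hidden_states = torch.randn(batch_size, 768)  # x_t, the current image-embedding estimate
+>>> proj_embedding = torch.randn(batch_size, 768)  # embedding the denoising process is conditioned on
+>>> encoder_hidden_states = torch.randn(batch_size, 77, 768)  # CLIP text hidden states
+>>> timestep = torch.tensor([10])
+
+>>> out = prior(
+...     hidden_states,
+...     timestep=timestep,
+...     proj_embedding=proj_embedding,
+...     encoder_hidden_states=encoder_hidden_states,
+... )
+>>> out.predicted_image_embedding.shape  # torch.Size([1, 768]), i.e. (batch_size, embedding_dim)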
+ + + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +PriorTransformerOutput + + +class diffusers.models.prior_transformer.PriorTransformerOutput + +< +source +> +( +predicted_image_embedding: FloatTensor + +) + + +Parameters + +predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. 
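+As a quick illustration of the set_attn_processor() and set_default_attn_processor() methods documented
+above, the sketch below switches a model to the PyTorch 2.0 attention processor and then restores the
+default. Any model exposing these methods works; the UNet checkpoint below is only an example.
+
+
+ Copied
+>>> from diffusers import UNet2DConditionModel
+>>> from diffusers.models.attention_processor import AttnProcessor2_0
+
+>>> model = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
+
+>>> # Route every Attention layer through torch 2.0 scaled-dot-product attention ...
+>>> model.set_attn_processor(AttnProcessor2_0())
+>>> # ... and later revert to the library default implementation.
+>>> model.set_default_attn_processor()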
+ + + + +ControlNetOutput + + +class diffusers.models.controlnet.ControlNetOutput + +< +source +> +( +down_block_res_samples: typing.Tuple[torch.Tensor] +mid_block_res_sample: Tensor + +) + + + + +ControlNetModel + + +class diffusers.ControlNetModel + +< +source +> +( +in_channels: int = 4 +conditioning_channels: int = 3 +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +projection_class_embeddings_input_dim: typing.Optional[int] = None +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +global_pool_conditions: bool = False + +) + + + + +from_unet + +< +source +> +( +unet: UNet2DConditionModel +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +load_weights_from_unet: bool = True + +) + + +Parameters + +unet (UNet2DConditionModel) — +UNet model which weights are copied to the ControlNet. Note that all configuration options are also +copied where applicable. + + + +Instantiate Controlnet class from UNet2DConditionModel. + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
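+A minimal sketch of the two methods above: from_unet() copies the weights of an existing UNet into a fresh
+ControlNet (the usual starting point for ControlNet training), and set_attention_slice() trades a little
+speed for lower peak memory. The checkpoint name is only an example of the standard Stable Diffusion
+repository layout.
+
+
+ Copied
+>>> from diffusers import ControlNetModel, UNet2DConditionModel
+
+>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
+
+>>> # Initialize a ControlNet whose weights are copied from the UNet where applicable.
+>>> controlnet = ControlNetModel.from_unet(unet)
+
+>>> # Compute attention in slices to reduce peak memory.
+>>> controlnet.set_attention_slice("auto")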
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +FlaxModelMixin + + +class diffusers.FlaxModelMixin + +< +source +> +( +) + + + +Base class for all flax models. +FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, +downloading and saving models. + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +dtype: dtype = +*model_args +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids are namespaced under a user or organization name, like +runwayml/stable-diffusion-v1-5. +A path to a directory containing model weights saved using save_pretrained(), +e.g., ./my_model_directory/. + + + +dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified all the computation will be performed with the given dtype. 
+Note that this only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and +~ModelMixin.to_bf16. + + +model_args (sequence of positional arguments, optional) — +All remaining positional arguments will be passed to the underlying model’s __init__ method. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., +output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, **kwargs will be directly passed to the +underlying model’s __init__ method (we assume all relevant updates to the configuration have +already been done) +If a configuration is not provided, kwargs will be first passed to the configuration class +initialization function (from_config()). Each key of kwargs that corresponds to +a configuration attribute will be used to override said attribute with the supplied kwargs +value. Remaining keys that do not correspond to any configuration attribute will be passed to the +underlying model’s __init__ function. + + + + +Instantiate a pretrained flax model from a pre-trained model configuration. +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +is_main_process: bool = True + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/main/en/api/models#diffusers.FlaxModelMixin.from_pretrained) class method + +to_bf16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. +This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) + +to_fp16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. +This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. 
+
+Examples:
+
+
+ Copied
+>>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # load model
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32; to cast these to float16
+>>> params = model.to_fp16(params)
+>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
+>>> # then pass the mask as follows
+>>> from flax import traverse_util
+
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> flat_params = traverse_util.flatten_dict(params)
+>>> mask = {
+...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
+...     for path in flat_params
+... }
+>>> mask = traverse_util.unflatten_dict(mask)
+>>> params = model.to_fp16(params, mask)
+
+to_fp32
+
+<
+source
+>
+(
+params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict]
+mask: typing.Any = None
+
+)
+
+
+Parameters
+
+params (Union[Dict, FrozenDict]) —
+A PyTree of model parameters.
+
+
+mask (Union[Dict, FrozenDict]) —
+A PyTree with same structure as the params tree. The leaves should be booleans, True for params
+you want to cast, and should be False for those you want to skip.
+
+
+
+Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the
+model parameters to fp32 precision. This returns a new params tree and does not cast the params in place.
+
+Examples:
+
+
+ Copied
+>>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # Download model and configuration from huggingface.co
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32; to illustrate the use of this method,
+>>> # we'll first cast to fp16 and back to fp32
+>>> params = model.to_fp16(params)
+>>> # now cast back to fp32
+>>> params = model.to_fp32(params)
+
+FlaxUNet2DConditionOutput
+
+
+class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput
+
+<
+source
+>
+(
+sample: ndarray
+
+)
+
+
+Parameters
+
+sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) —
+Hidden states conditioned on encoder_hidden_states input. Output of last layer of model.
+
+
+
+
+replace
+
+<
+source
+>
+(
+**updates
+
+)
+
+
+
+Returns a new object replacing the specified fields with new values.
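+Because FlaxUNet2DConditionOutput is a frozen Flax dataclass, replace() returns a new output object instead
+of mutating the existing one. A small sketch, constructing the output by hand purely for illustration (in
+practice it is returned by FlaxUNet2DConditionModel):
+
+
+ Copied
+>>> import jax.numpy as jnp
+>>> from diffusers.models.unet_2d_condition_flax import FlaxUNet2DConditionOutput
+
+>>> out = FlaxUNet2DConditionOutput(sample=jnp.zeros((1, 4, 64, 64)))
+>>> rescaled = out.replace(sample=out.sample * 0.5)  # new object; `out` is left unchanged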
+ +FlaxUNet2DConditionModel + + +class diffusers.FlaxUNet2DConditionModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +use_memory_efficient_attention: bool = False +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — +The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. The corresponding class names will be: “FlaxUpBlock2D”, +“FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +use_memory_efficient_attention (bool, optional, defaults to False) — +enable memory efficient attention https://arxiv.org/abs/2112.05682 + + + +FlaxUNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a +timestep and returns sample shaped output. +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. 
+Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxDecoderOutput + + +class diffusers.models.vae_flax.FlaxDecoderOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +Parameters dtype + + + +Output of decoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKLOutput + + +class diffusers.models.vae_flax.FlaxAutoencoderKLOutput + +< +source +> +( +latent_dist: FlaxDiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKL + + +class diffusers.FlaxAutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 +dtype: dtype = +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — +Input channels + + +out_channels (int, optional, defaults to 3) — +Output channels + + +down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +DownEncoder block type + + +up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +UpDecoder block type + + +block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple containing the number of output channels for each block + + +layers_per_block (int, optional, defaults to 2) — +Number of Resnet layer for each block + + +act_fn (str, optional, defaults to silu) — +Activation function + + +latent_channels (int, optional, defaults to 4) — +Latent space channels + + +norm_num_groups (int, optional, defaults to 32) — +Norm num group + + +sample_size (int, optional, defaults to 32) — +Sample input size + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 +/ scaling_factor z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. 
+ + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +parameters dtype + + + +Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational +Bayes by Diederik P. Kingma and Max Welling. +This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxControlNetOutput + + +class diffusers.models.controlnet_flax.FlaxControlNetOutput + +< +source +> +( +down_block_res_samples: ndarray +mid_block_res_sample: ndarray + +) + + + + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxControlNetModel + + +class diffusers.FlaxControlNetModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Tuple[int] = (16, 32, 96, 256) +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. 
Will convert it to rgb if it’s bgr + + +conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in conditioning_embedding layer + + + +Quoting from https://arxiv.org/abs/2302.05543: “Stable Diffusion uses a pre-processing method similar to VQ-GAN +[11] to convert the entire dataset of 512 × 512 images into smaller 64 × 64 “latent images” for stabilized +training. This requires ControlNets to convert image-based conditions to 64 × 64 feature space to match the +convolution size. We use a tiny network E(·) of four convolution layers with 4 × 4 kernels and 2 × 2 strides +(activated by ReLU, channels are 16, 32, 64, 128, initialized with Gaussian weights, trained jointly with the full +model) to encode image-space conditions … into feature maps …” +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization diff --git a/scrapped_outputs/9a02726c38b8521fc78c438d7ff12716.txt b/scrapped_outputs/9a02726c38b8521fc78c438d7ff12716.txt new file mode 100644 index 0000000000000000000000000000000000000000..7125b1f4576bd006c2775b645a91c4ebfdd25fe6 --- /dev/null +++ b/scrapped_outputs/9a02726c38b8521fc78c438d7ff12716.txt @@ -0,0 +1,122 @@ +🧨 Diffusers + +🤗 Diffusers provides pretrained vision and audio diffusion models, and serves as a modular toolbox for inference and training. +More precisely, 🤗 Diffusers offers: +State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see Using Diffusers) or have a look at Pipelines to get an overview of all supported pipelines and their corresponding papers. +Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference. For more information see Schedulers. +Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system. See Models for more details +Training examples to show how to train the most popular diffusion model tasks. For more information see Training. + +🧨 Diffusers Pipelines + +The following table summarizes all officially supported pipelines, their corresponding paper, and if +available a colab notebook to directly try them out. 
+Pipeline +Paper +Tasks +Colab +alt_diffusion +AltDiffusion +Image-to-Image Text-Guided Generation + +audio_diffusion +Audio Diffusion +Unconditional Audio Generation + +cycle_diffusion +Cycle Diffusion +Image-to-Image Text-Guided Generation + +dance_diffusion +Dance Diffusion +Unconditional Audio Generation + +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation + +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image + +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation + +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting + +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation + +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +stable_diffusion +Stable Diffusion +Text-to-Image Generation + +stable_diffusion +Stable Diffusion +Image-to-Image Text-Guided Generation + +stable_diffusion +Stable Diffusion +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation + +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation + +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation + +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image Generation + +Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. diff --git a/scrapped_outputs/9a098cfee7acff742418d899e19e4684.txt b/scrapped_outputs/9a098cfee7acff742418d899e19e4684.txt new file mode 100644 index 0000000000000000000000000000000000000000..3852e4b540ae565f239e88502bab4b42a7fe8ab9 --- /dev/null +++ b/scrapped_outputs/9a098cfee7acff742418d899e19e4684.txt @@ -0,0 +1,255 @@ +DiffEdit DiffEdit: Diffusion-based semantic image editing with mask guidance is by Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. The abstract from the paper is: Image generation has recently seen tremendous advances, with diffusion models allowing to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method to take advantage of text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. 
Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require to provide a mask, making the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is able to automatically generate a mask highlighting regions of the input image that need to be edited, by contrasting predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest and show excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images. The original codebase can be found at Xiang-cd/DiffEdit-stable-diffusion, and you can try it out in this demo. This pipeline was contributed by clarencechen. ❤️ Tips The pipeline can generate masks that can be fed into other inpainting pipelines. In order to generate an image using this pipeline, both an image mask (source and target prompts can be manually specified or generated, and passed to generate_mask()) +and a set of partially inverted latents (generated using invert()) must be provided as arguments when calling the pipeline to generate the final edited image. The function generate_mask() exposes two prompt arguments, source_prompt and target_prompt +that let you control the locations of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the generated mask, you simply have to set the embeddings related to the phrases including “cat” to +source_prompt and “dog” to target_prompt. When generating partially inverted latents using invert, assign a caption or text embedding describing the +overall image to the prompt argument to help guide the inverse latent sampling process. In most cases, the +source concept is sufficiently descriptive to yield good results, but feel free to explore alternatives. When calling the pipeline to generate the final edited image, assign the source concept to negative_prompt +and the target concept to prompt. Taking the above example, you simply have to set the embeddings related to +the phrases including “cat” to negative_prompt and “dog” to prompt. If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_prompt and target_prompt in the arguments to generate_mask. Change the input prompt in invert() to include “dog”. Swap the prompt and negative_prompt in the arguments to call the pipeline to generate the final edited image. The source and target prompts, or their corresponding embeddings, can also be automatically generated. Please refer to the DiffEdit guide for more details. 
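+Putting these tips together, here is a minimal sketch of the "cat -> dog" workflow described above. It
+assumes a picture of a cat has already been loaded as init_image (a PIL.Image); the checkpoint and prompts
+are only examples.
+
+
+ Copied
+>>> import torch
+>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
+
+>>> pipeline = StableDiffusionDiffEditPipeline.from_pretrained(
+...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
+... )
+>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
+>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
+>>> pipeline.enable_model_cpu_offload()
+
+>>> source_prompt = "a photo of a cat"  # describes the input image
+>>> target_prompt = "a photo of a dog"  # describes the desired edit
+
+>>> # 1. The mask highlights where the "cat" and "dog" predictions disagree.
+>>> mask_image = pipeline.generate_mask(
+...     image=init_image, source_prompt=source_prompt, target_prompt=target_prompt
+... )
+>>> # 2. Partially invert the image latents, guided by the source concept.
+>>> image_latents = pipeline.invert(image=init_image, prompt=source_prompt).latents
+>>> # 3. Edit: target concept as `prompt`, source concept as `negative_prompt`.
+>>> image = pipeline(
+...     prompt=target_prompt,
+...     negative_prompt=source_prompt,
+...     mask_image=mask_image,
+...     image_latents=image_latents,
+... ).images[0]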
StableDiffusionDiffEditPipeline class diffusers.StableDiffusionDiffEditPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor inverse_scheduler: DDIMInverseScheduler requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. inverse_scheduler (DDIMInverseScheduler) — +A scheduler to be used in combination with unet to fill in the unmasked part of the input latents. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. This is an experimental feature! Pipeline for text-guided image inpainting using Stable Diffusion and DiffEdit. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading and saving methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights generate_mask < source > ( image: Union = None target_prompt: Union = None target_negative_prompt: Union = None target_prompt_embeds: Optional = None target_negative_prompt_embeds: Optional = None source_prompt: Union = None source_negative_prompt: Union = None source_prompt_embeds: Optional = None source_negative_prompt_embeds: Optional = None num_maps_per_mask: Optional = 10 mask_encode_strength: Optional = 0.5 mask_thresholding_ratio: Optional = 3.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 generator: Union = None output_type: Optional = 'np' cross_attention_kwargs: Optional = None ) → List[PIL.Image.Image] or np.array Parameters image (PIL.Image.Image) — +Image or tensor representing an image batch to be used for computing the mask. target_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation. If not defined, you need to pass +prompt_embeds. target_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). target_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. target_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. source_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation using DiffEdit. If not defined, you need to +pass source_prompt_embeds or source_image instead. source_negative_prompt (str or List[str], optional) — +The prompt or prompts to guide semantic mask generation away from using DiffEdit. If not defined, you +need to pass source_negative_prompt_embeds or source_image instead. source_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to guide the semantic mask generation. Can be used to easily tweak text +inputs (prompt weighting). If not provided, text embeddings are generated from source_prompt input +argument. source_negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings to negatively guide the semantic mask generation. Can be used to easily +tweak text inputs (prompt weighting). If not provided, text embeddings are generated from +source_negative_prompt input argument. num_maps_per_mask (int, optional, defaults to 10) — +The number of noise maps sampled to generate the semantic mask using DiffEdit. mask_encode_strength (float, optional, defaults to 0.5) — +The strength of the noise maps sampled to generate the semantic mask using DiffEdit. Must be between 0 +and 1. mask_thresholding_ratio (float, optional, defaults to 3.0) — +The maximum multiple of the mean absolute difference used to clamp the semantic guidance map before +mask binarization. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. Returns +List[PIL.Image.Image] or np.array + +When returning a List[PIL.Image.Image], the list consists of a batch of single-channel binary images +with dimensions (height // self.vae_scale_factor, width // self.vae_scale_factor). If it’s +np.array, the shape is (batch_size, height // self.vae_scale_factor, width // self.vae_scale_factor). + Generate a latent mask given a mask prompt, a target prompt, and an image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] invert < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 50 inpaint_strength: float = 0.8 guidance_scale: float = 7.5 negative_prompt: Union = None generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None decode_latents: bool = False output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None lambda_auto_corr: float = 20.0 lambda_kl: float = 20.0 num_reg_steps: int = 0 num_auto_corr_rolls: int = 5 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image) — +Image or tensor representing an image batch to produce the inverted latents guided by prompt. inpaint_strength (float, optional, defaults to 0.8) — +Indicates extent of the noising process to run latent inversion. Must be between 0 and 1. When +inpaint_strength is 1, the inversion process is run for the full number of iterations specified in +num_inference_steps. image is used as a reference for the inversion process, and adding more noise +increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. decode_latents (bool, optional, defaults to False) — +Whether or not to decode the inverted latents into a generated image. Setting this argument to True +decodes all inverted latents for each timestep into a list of generated images. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the +AttnProcessor as defined in +self.processor. lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction. lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback-Leibler divergence output. num_reg_steps (int, optional, defaults to 0) — +Number of regularization loss steps. num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps. Generate inverted latents given a prompt and image. Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A bowl of fruits" + +>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents __call__ < source > ( prompt: Union = None mask_image: Union = None image_latents: Union = None inpaint_strength: Optional = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. mask_image (PIL.Image.Image) — +Image or tensor representing an image batch to mask the generated image. White pixels in the mask are +repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, 1, H, W). image_latents (PIL.Image.Image or torch.FloatTensor) — +Partially noised image latents from the inversion process to be used as inputs for image generation. inpaint_strength (float, optional, defaults to 0.8) — +Indicates the extent to inpaint the masked area. Must be between 0 and 1.
When inpaint_strength is 1, the +denoising process is run on the masked area for the full number of iterations specified in +num_inference_steps. image_latents is used as a reference for the masked area, and adding more +noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + +>>> init_image = download_image(img_url).resize((768, 768)) + +>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> mask_prompt = "A bowl of fruits" +>>> prompt = "A bowl of pears" + +>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=prompt, target_prompt=mask_prompt) +>>> image_latents = pipe.invert(image=init_image, prompt=mask_prompt).latents +>>> image = pipe(prompt=prompt, mask_image=mask_image, image_latents=image_latents).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings.
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/9a1393b7c99d552c349e68b23cfe171b.txt b/scrapped_outputs/9a1393b7c99d552c349e68b23cfe171b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/9a1393b7c99d552c349e68b23cfe171b.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. 
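As a minimal sketch of how these two inference-time controls fit together, assuming the cvssp/audioldm-s-full-v2 checkpoint and an illustrative prompt (the full reference example for this pipeline appears further below), you might write: Copied import scipy +import torch +from diffusers import AudioLDMPipeline + +# illustrative checkpoint; any AudioLDM checkpoint with the same pipeline API should work +pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16).to("cuda") + +# a descriptive, context-specific prompt tends to work better than a single noun +prompt = "high quality recording of a water stream in a forest, birds chirping" + +# more denoising steps give higher quality but slower inference; audio_length_in_s sets the clip duration in seconds +audio = pipe(prompt, num_inference_steps=25, audio_length_in_s=10.24).audios[0] + +# save the 16 kHz waveform as a .wav file +scipy.io.wavfile.write("forest_stream.wav", rate=16000, data=audio)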
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/9a1f80504bef79ba8db4a18dd5142224.txt b/scrapped_outputs/9a1f80504bef79ba8db4a18dd5142224.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/9a1f80504bef79ba8db4a18dd5142224.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. 
From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifer-free-guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. 
You can also use guidance with LCM-LoRA, but due to the nature of training the model is very sensitve to the guidance_scale values, high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5 . Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all inputs, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the combination that works best. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL.
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff.
Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/9a2b50e25edfddb5a3c82dee2762de56.txt b/scrapped_outputs/9a2b50e25edfddb5a3c82dee2762de56.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8eb8600572d533c95c8e93de2ce9be735ab2e02 --- /dev/null +++ b/scrapped_outputs/9a2b50e25edfddb5a3c82dee2762de56.txt @@ -0,0 +1,465 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . 
Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. 
Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4: TencentARC/t2iadapter_color_sd14v1, trained with a spatial color palette (the control image is an image with an 8x8 color palette); TencentARC/t2iadapter_canny_sd14v1, trained with canny edge detection (a monochrome image with white edges on a black background); TencentARC/t2iadapter_sketch_sd14v1, trained with PidiNet edge detection (a hand-drawn monochrome image with white outlines on a black background); TencentARC/t2iadapter_depth_sd14v1, trained with Midas depth estimation (a grayscale image with black representing deep areas and white representing shallow areas); TencentARC/t2iadapter_openpose_sd14v1, trained with OpenPose bone images; TencentARC/t2iadapter_keypose_sd14v1, trained with mmpose skeleton images; TencentARC/t2iadapter_seg_sd14v1, trained with semantic segmentation (a custom segmentation protocol image). T2I-Adapter with Stable Diffusion 1.5: TencentARC/t2iadapter_canny_sd15v2, TencentARC/t2iadapter_depth_sd15v2, TencentARC/t2iadapter_sketch_sd15v2, TencentARC/t2iadapter_zoedepth_sd15v1. T2I-Adapter with Stable Diffusion XL: Adapter/t2iadapter with subfolder="sketch_sdxl_1.0", "canny_sdxl_1.0", or "openpose_sdxl_1.0". Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters.
Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... ) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. 
vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. 
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. 
negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. 
In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/9a93c2137ea74c9075b5d427abb85cf0.txt b/scrapped_outputs/9a93c2137ea74c9075b5d427abb85cf0.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/9a93c2137ea74c9075b5d427abb85cf0.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. 
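One minimal way to confirm which class was resolved (a quick sketch reusing the same checkpoint) is to load it with the image-to-image auto class and inspect the resulting object: Copied
from diffusers import AutoPipelineForImage2Image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)

# The auto class resolved the "stable-diffusion" checkpoint to the task-specific pipeline
print(pipeline.__class__.__name__)  # StableDiffusionImg2ImgPipeline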
You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. 
For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/9ab172d7493afd28f8f84a6a47b270fd.txt b/scrapped_outputs/9ab172d7493afd28f8f84a6a47b270fd.txt new file mode 100644 index 0000000000000000000000000000000000000000..13e167b0959d0af366c380365e13d791431b1240 --- /dev/null +++ b/scrapped_outputs/9ab172d7493afd28f8f84a6a47b270fd.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. 
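As a rough sketch of how these helpers compose (the two image URLs below are the conditioning images referenced later in this document and are used here purely for illustration): Copied
from diffusers.utils import load_image, make_image_grid, export_to_gif

# Download two images and convert them to PIL
image_1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png")
image_2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png")

# Arrange them side by side; make_image_grid expects rows * cols == len(images)
grid = make_image_grid([image_1, image_2], rows=1, cols=2)
grid.save("grid.png")

# A list of PIL frames can also be written out as a GIF
export_to_gif([image_1, image_2], "frames.gif")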
diff --git a/scrapped_outputs/9ac2edca812a1058c27c1f1dc62ff350.txt b/scrapped_outputs/9ac2edca812a1058c27c1f1dc62ff350.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/9ac2edca812a1058c27c1f1dc62ff350.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0 SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/9acf7e45de078dbc4bc72d8c67d00192.txt b/scrapped_outputs/9acf7e45de078dbc4bc72d8c67d00192.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/9ae00d0f8662900e0adcb83f8984adca.txt b/scrapped_outputs/9ae00d0f8662900e0adcb83f8984adca.txt new file mode 100644 index 0000000000000000000000000000000000000000..68423ddd910d132ae1322ca37d1a005d76c1e75b --- /dev/null +++ b/scrapped_outputs/9ae00d0f8662900e0adcb83f8984adca.txt @@ -0,0 +1,238 @@ +VQDiffusionScheduler + + +Overview + +Original paper can be found here + +VQDiffusionScheduler + + +class diffusers.VQDiffusionScheduler + +< +source +> +( +num_vec_classes: int +num_train_timesteps: int = 100 +alpha_cum_start: float = 0.99999 +alpha_cum_end: float = 9e-06 +gamma_cum_start: float = 9e-06 +gamma_cum_end: float = 0.99999 + +) + + +Parameters + +num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. + + +num_train_timesteps (int) — +Number of diffusion steps used to train the model. + + +alpha_cum_start (float) — +The starting cumulative alpha value. + + +alpha_cum_end (float) — +The ending cumulative alpha value. + + +gamma_cum_start (float) — +The starting cumulative gamma value. + + +gamma_cum_end (float) — +The ending cumulative gamma value. 
The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image. The VQ-diffusion scheduler converts the transformer’s output into a sample for the unnoised image at the previous diffusion timestep. ~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. For more details, see the original paper: https://arxiv.org/abs/2111.14822

log_Q_t_transitioning_to_known_class

< source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)

Parameters

t (torch.Long) — The timestep that determines which transition matrix is used.

x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t.

log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — The log one-hot vectors of x_t.

cumulative (bool) — If cumulative is False, we use the single step transition matrix t-1->t. If cumulative is True, we use the cumulative transition matrix 0->t.

Returns

torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)

Each column of the returned matrix is a row of log probabilities of the complete probability transition matrix. When non cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be masked.

Where:

q_n is the probability distribution for the forward process of the nth latent pixel.
C_0 is a class of a latent pixel embedding.
C_k is the class of the masked latent pixel.

non-cumulative result (omitting logarithms):

q_0(x_t | x_{t-1} = C_0)  ...  q_n(x_t | x_{t-1} = C_0)
          .                .             .
          .                 .            .
          .                  .           .
q_0(x_t | x_{t-1} = C_k)  ...  q_n(x_t | x_{t-1} = C_k)

cumulative result (omitting logarithms):

q_0_cumulative(x_t | x_0 = C_0)      ...  q_n_cumulative(x_t | x_0 = C_0)
          .                           .             .
          .                            .            .
          .                             .           .
q_0_cumulative(x_t | x_0 = C_{k-1})  ...  q_n_cumulative(x_t | x_0 = C_{k-1})

Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each latent pixel in x_t. See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs.

q_posterior

< source > ( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels)

Parameters

t (torch.Long) — The timestep that determines which transition matrix is used.

Returns

torch.FloatTensor of shape (batch size, num classes, num latent pixels)

The log probabilities for the predicted classes of the image at timestep t-1, i.e. Equation (11).

Calculates the log probabilities for the predicted classes of the image at timestep t-1, i.e. Equation (11). Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only forward probabilities.
+Equation (11) stated in terms of forward probabilities via Equation (5): +Where: +the sum is over x0 = {C_0 … C{k-1}} (classes for x_0) +p(x{t-1} | x_t) = sum( q(x_t | x{t-1}) q(x_{t-1} | x_0) p(x_0) / q(x_t | x_0) ) + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device) — +device to place the timesteps and the diffusion process parameters (alpha, beta, gamma) on. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: torch.int64 +sample: LongTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + +Parameters + +t (torch.long) — +The timestep that determines which transition matrices are used. +x_t — (torch.LongTensor of shape (batch size, num latent pixels)): +The classes of each latent pixel at time t +generator — (torch.Generator or None): +RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from. + + +return_dict (bool) — +option for returning tuple rather than VQDiffusionSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the +docstring for self.q_posterior for more in depth docs on how Equation (11) is computed. diff --git a/scrapped_outputs/9ae7c12010163c8ef0777c691425e357.txt b/scrapped_outputs/9ae7c12010163c8ef0777c691425e357.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/9ae7c12010163c8ef0777c691425e357.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because its only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
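Before configuring 🤗 Accelerate, here is an optional sketch that backs up the “lightweight” point above: it instantiates a T2IAdapter with the same configuration the training script uses later in this guide and counts its parameters (the exact count depends on the configuration, so treat the ~77M figure as approximate): Copied
from diffusers import T2IAdapter

# Same configuration as the randomly initialized adapter in train_t2i_adapter_sdxl.py
adapter = T2IAdapter(
    in_channels=3,
    channels=(320, 640, 1280, 1280),
    num_res_blocks=2,
    downscale_factor=16,
    adapter_type="full_adapter_xl",
)

# Count trainable parameters; this small size is what makes the adapter cheap to train and store
num_params = sum(p.numel() for p in adapter.parameters())
print(f"T2I-Adapter parameters: {num_params / 1e6:.1f}M")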
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config

write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \
  --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images.
Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteSchedulerTest +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteSchedulerTest.from_config(pipe.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/9aebf41694593e3418993f3c16fcbd35.txt b/scrapped_outputs/9aebf41694593e3418993f3c16fcbd35.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/9aebf41694593e3418993f3c16fcbd35.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. 
For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namepsace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all it’s components to the Hub. For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. 
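As a brief sketch (reusing the my-controlnet-model-private repository created above; the token value is a placeholder), the private model can only be reloaded after authenticating: Copied
from huggingface_hub import login
from diffusers import ControlNetModel

# Authenticate with your access token; otherwise the private repository resolves to a 404
login(token="hf_...")  # placeholder, use your own token

model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model-private")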
You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/9b329bfd97f6890a5d33a2ccb0d58a51.txt b/scrapped_outputs/9b329bfd97f6890a5d33a2ccb0d58a51.txt new file mode 100644 index 0000000000000000000000000000000000000000..e2964b4de3ce4ab12035f087f12488cf56bb741a --- /dev/null +++ b/scrapped_outputs/9b329bfd97f6890a5d33a2ccb0d58a51.txt @@ -0,0 +1,167 @@ +InstructPix2Pix + +InstructPix2Pix is a method to fine-tune text-conditioned diffusion models such that they can follow an edit instruction for an input image. Models fine-tuned using this method take the following as inputs: + +The output is an “edited” image that reflects the edit instruction applied on the input image: + +The train_instruct_pix2pix.py script shows how to implement the training procedure and adapt it for Stable Diffusion. +Disclaimer: Even though train_instruct_pix2pix.py implements the InstructPix2Pix +training procedure while being faithful to the original implementation we have only tested it on a small-scale dataset. This can impact the end results. For better results, we recommend longer training runs with a larger dataset. Here you can find a large dataset for InstructPix2Pix training. + +Running locally with PyTorch + + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: +Important +To make sure you can successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install -e . +Then cd in the example folder and run + + + Copied +pip install -r requirements.txt +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config +Or for a default accelerate configuration without answering questions about your environment + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell e.g. a notebook + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() + +Toy example + +As mentioned before, we’ll use a small toy dataset for training. The dataset +is a smaller version of the original dataset used in the InstructPix2Pix paper. +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. 
You’ll also need to specify the dataset name in DATASET_ID: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATASET_ID="fusing/instructpix2pix-1000-samples" +Now, we can launch training: + + + Copied +accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 --random_flip \ + --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 --checkpoints_total_limit=1 \ + --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 +Additionally, we support performing validation inference to monitor training progress +with Weights and Biases. You can enable this feature with report_to="wandb": + + + Copied +accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 --random_flip \ + --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 --checkpoints_total_limit=1 \ + --learning_rate=5e-05 --max_grad_norm=1 --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --val_image_url="https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" \ + --validation_prompt="make the mountains snowy" \ + --seed=42 \ + --report_to=wandb +We recommend this type of validation as it can be useful for model debugging. Note that you need wandb installed to use this. You can install wandb by running pip install wandb. +Here, you can find an example training run that includes some validation samples and the training hyperparameters. +Note: In the original paper, the authors observed that even when the model is trained with an image resolution of 256x256, it generalizes well to bigger resolutions such as 512x512. This is likely because of the larger dataset they used during training. + +Training with multiple GPUs + +accelerate allows for seamless multi-GPU training. Follow the instructions here +for running distributed training with accelerate. 
Here is an example command: + + + Copied +accelerate launch --mixed_precision="fp16" --multi_gpu train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5 \ + --dataset_name=sayakpaul/instructpix2pix-1000-samples \ + --use_ema \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 --random_flip \ + --train_batch_size=4 --gradient_accumulation_steps=4 --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 --checkpoints_total_limit=1 \ + --learning_rate=5e-05 --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 + +Inference + +Once training is complete, we can perform inference: + + + Copied +import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline + +model_id = "your_model_id" # <- replace this +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png" + + +def download_image(url): + image = PIL.Image.open(requests.get(url, stream=True).raw) + image = PIL.ImageOps.exif_transpose(image) + image = image.convert("RGB") + return image + + +image = download_image(url) +prompt = "wipe out the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipe( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") +An example model repo obtained using this training script can be found +here - sayakpaul/instruct-pix2pix. +We encourage you to play with the following three parameters to control +speed and quality during performance: +num_inference_steps +image_guidance_scale +guidance_scale +Particularly, image_guidance_scale and guidance_scale can have a profound impact +on the generated (“edited”) image (see here for an example). diff --git a/scrapped_outputs/9b41926d0054698c891fa2c3273255bd.txt b/scrapped_outputs/9b41926d0054698c891fa2c3273255bd.txt new file mode 100644 index 0000000000000000000000000000000000000000..b2d859bc97e9bd992d2613a0e2e7f43466ad9f8d --- /dev/null +++ b/scrapped_outputs/9b41926d0054698c891fa2c3273255bd.txt @@ -0,0 +1,75 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. 
To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. 
This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/9b78296a1e9803c761d914f079e80be5.txt b/scrapped_outputs/9b78296a1e9803c761d914f079e80be5.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/9b78296a1e9803c761d914f079e80be5.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. 
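To make the input and output shapes concrete before the class reference below, here is a minimal sketch that runs a single forward pass through a randomly initialized PriorTransformer with its default configuration. It is only an illustration of the tensor shapes documented under forward() below and does not load any pretrained weights (in practice the prior is loaded as part of an unCLIP-style pipeline checkpoint):

import torch
from diffusers import PriorTransformer

# Default config: embedding_dim=768, num_embeddings=77. This is the full-size prior,
# so pass smaller config values if you only want to smoke-test the tensor shapes.
prior = PriorTransformer()

batch_size = 1
hidden_states = torch.randn(batch_size, 768)              # currently predicted image embedding
timestep = torch.tensor([10])                              # current denoising step
proj_embedding = torch.randn(batch_size, 768)              # embedding the denoising is conditioned on
encoder_hidden_states = torch.randn(batch_size, 77, 768)   # per-token text hidden states
attention_mask = torch.ones(batch_size, 77, dtype=torch.bool)

out = prior(
    hidden_states,
    timestep,
    proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
    attention_mask=attention_mask,
)
print(out.predicted_image_embedding.shape)  # torch.Size([1, 768])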
PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. 
attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/9bc36607608e15d8e95a56774d4d1413.txt b/scrapped_outputs/9bc36607608e15d8e95a56774d4d1413.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/9bc36607608e15d8e95a56774d4d1413.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/9be521331b8577f21688123a8dc21511.txt b/scrapped_outputs/9be521331b8577f21688123a8dc21511.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee5d5916fb70fd8ec6cb76f08d4c82e91bebd0c4 --- /dev/null +++ b/scrapped_outputs/9be521331b8577f21688123a8dc21511.txt @@ -0,0 +1,189 @@ +Linear multistep scheduler for discrete beta schedules + + +Overview + +Original implementation can be found here. + +LMSDiscreteScheduler + + +class diffusers.LMSDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. 
+ + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by +Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +get_lms_coefficient + +< +source +> +( +order +t +current_order + +) + + +Parameters + +order (TODO) — + + +t (TODO) — + + +current_order (TODO) — + + + +Compute a linear multistep coefficient. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the K-LMS algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +order: int = 4 +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +order — coefficient for multi-step inference. + + +return_dict (bool) — option for returning tuple rather than LMSDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/9bfe603ceef15986222600fc6f93afb2.txt b/scrapped_outputs/9bfe603ceef15986222600fc6f93afb2.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/9bfe603ceef15986222600fc6f93afb2.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. 
The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. 
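For instance, a short sketch of loading a private or gated checkpoint with an explicit token, and of later restricting a load to files that are already in the local cache (the repository id and the token below are placeholders, not a real model):

import torch
from diffusers import UNet2DConditionModel

# Pass a User Access Token explicitly instead of relying on `huggingface-cli login`.
unet = UNet2DConditionModel.from_pretrained(
    "your-org/your-private-model",  # placeholder repository id
    subfolder="unet",
    torch_dtype=torch.float16,
    token="hf_...",                 # placeholder token
)

# Later, load from the local cache only (no calls to the Hub).
unet = UNet2DConditionModel.from_pretrained(
    "your-org/your-private-model",
    subfolder="unet",
    local_files_only=True,
)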
Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). 
from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. 
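>>> # Note: the call below returns a (model, params) pair; Flax modules keep their parameters in a separate PyTree rather than inside the module.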
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you want don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_f16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. 
The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/9c35cf077ab95461da4947e465ce79fe.txt b/scrapped_outputs/9c35cf077ab95461da4947e465ce79fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..c5edad39e1983c7fb75dd543a1419ed0874a67f5 --- /dev/null +++ b/scrapped_outputs/9c35cf077ab95461da4947e465ce79fe.txt @@ -0,0 +1,1075 @@ +VersatileDiffusion + +VersatileDiffusion was proposed in Versatile Diffusion: Text, Images and Variations All in One Diffusion Model by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi . +The abstract of the paper is the following: +The recent advances in diffusion models have set an impressive milestone in many generation tasks. Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD initiates novel extensions and applications such as disentanglement of style and semantic, image-text dual-guided generation, etc.; c) Through these experiments and applications, VD provides more semantic insights of the generated outputs. + +Tips + +VersatileDiffusion is conceptually very similar as Stable Diffusion, but instead of providing just a image data stream conditioned on text, VersatileDiffusion provides both a image and text data stream and can be conditioned on both text and image. 
+ +*Run VersatileDiffusion* + +You can both load the memory intensive “all-in-one” VersatileDiffusionPipeline that can run all tasks +with the same class as shown in VersatileDiffusionPipeline.text_to_image(), VersatileDiffusionPipeline.image_variation(), and VersatileDiffusionPipeline.dual_guided() +or +You can run the individual pipelines which are much more memory efficient: +Text-to-Image: VersatileDiffusionTextToImagePipeline.call() +Image Variation: VersatileDiffusionImageVariationPipeline.call() +Dual Text and Image Guided Generation: VersatileDiffusionDualGuidedPipeline.call() + +*How to load and use different schedulers.* + +The versatile diffusion pipelines uses DDIMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the alt diffusion pipeline such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("shi-labs/versatile-diffusion", subfolder="scheduler") +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", scheduler=euler_scheduler) + +VersatileDiffusionPipeline + + +class diffusers.VersatileDiffusionPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPFeatureExtractor +text_encoder: CLIPTextModel +image_encoder: CLIPVisionModel +image_unet: UNet2DConditionModel +text_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionMegaSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
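For instance, the generic DiffusionPipeline methods can be used to save the whole pipeline locally and to move it onto a device (a brief sketch; the output directory name below is arbitrary):

>>> import torch
>>> from diffusers import VersatileDiffusionPipeline

>>> pipe = VersatileDiffusionPipeline.from_pretrained(
...     "shi-labs/versatile-diffusion", torch_dtype=torch.float16
... )
>>> pipe.save_pretrained("./versatile-diffusion-local")  # saving the pipeline locally
>>> pipe = pipe.to("cuda")  # running on a particular device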
+ +dual_guided + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. 
+ + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe.dual_guided( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") + +image_variation + +< +source +> +( +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.image_variation(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +text_to_image + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +VersatileDiffusionTextToImagePipeline + + +class diffusers.VersatileDiffusionTextToImagePipeline + +< +source +> +( +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. 
Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionTextToImagePipeline +>>> import torch + +>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +VersatileDiffusionImageVariationPipeline + + +class diffusers.VersatileDiffusionImageVariationPipeline + +< +source +> +( +image_feature_extractor: CLIPFeatureExtractor +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
+ + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionImageVariationPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +VersatileDiffusionDualGuidedPipeline + + +class diffusers.VersatileDiffusionDualGuidedPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPFeatureExtractor +text_encoder: CLIPTextModelWithProjection +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
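As with the other Versatile Diffusion pipelines on this page, this pipeline exposes enable_sequential_cpu_offload() (documented further below) to trade inference speed for a much smaller GPU memory footprint. The snippet that follows is only a minimal sketch of enabling it: it assumes accelerate is installed and reuses the shi-labs/versatile-diffusion checkpoint and example image from the snippets on this page.

 Copied
import torch
import requests
from io import BytesIO
from PIL import Image
from diffusers import VersatileDiffusionDualGuidedPipeline

# Same example image as in the other snippets on this page.
url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained(
    "shi-labs/versatile-diffusion", torch_dtype=torch.float16
)
pipe.remove_unused_weights()

# Instead of pipe.to("cuda"): submodules stay on the CPU and are moved to the GPU
# only while their forward pass runs, lowering peak GPU memory usage.
pipe.enable_sequential_cpu_offload()

image = pipe(
    prompt="a red car in the sun", image=init_image, text_to_image_strength=0.75
).images[0]
image.save("./car_variation.png")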
+ +__call__ + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. 
+ + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionDualGuidedPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/9c7aa7fb0d6cddfba56bc271ba7998bc.txt b/scrapped_outputs/9c7aa7fb0d6cddfba56bc271ba7998bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..de6c58fcde78c492edef95532e5c2bb054c86845 --- /dev/null +++ b/scrapped_outputs/9c7aa7fb0d6cddfba56bc271ba7998bc.txt @@ -0,0 +1,81 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. 
Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. 
This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. 
Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/9c8d958a665c228ca3ad32d2bf627ef1.txt b/scrapped_outputs/9c8d958a665c228ca3ad32d2bf627ef1.txt new file mode 100644 index 0000000000000000000000000000000000000000..b141ceaf084a8212da6ac7e6a804208f1ca7d021 --- /dev/null +++ b/scrapped_outputs/9c8d958a665c228ca3ad32d2bf627ef1.txt @@ -0,0 +1,35 @@ +Dance Diffusion Dance Diffusion is by Zach Evans. Dance Diffusion is the first in a suite of generative audio tools for producers and musicians released by Harmonai. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DanceDiffusionPipeline class diffusers.DanceDiffusionPipeline < source > ( unet scheduler ) Parameters unet (UNet1DModel) — +A UNet1DModel to denoise the encoded audio. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +IPNDMScheduler. Pipeline for audio generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( batch_size: int = 1 num_inference_steps: int = 100 generator: Union = None audio_length_in_s: Optional = None return_dict: bool = True ) → AudioPipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) —
+The number of audio samples to generate. num_inference_steps (int, optional, defaults to 100) —
+The number of denoising steps. More denoising steps usually lead to a higher-quality audio sample at
+the expense of slower inference. generator (torch.Generator, optional) —
+A torch.Generator to make
+generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) —
+The length of the generated audio sample in seconds. return_dict (bool, optional, defaults to True) —
+Whether or not to return an AudioPipelineOutput instead of a plain tuple. Returns
+AudioPipelineOutput or tuple
+
+If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is
+returned where the first element is a list with the generated audio.
+ The call function to the pipeline for generation. Example: Copied from diffusers import DiffusionPipeline
+from scipy.io.wavfile import write
+
+model_id = "harmonai/maestro-150k"
+pipe = DiffusionPipeline.from_pretrained(model_id)
+pipe = pipe.to("cuda")
+
+audios = pipe(audio_length_in_s=4.0).audios
+
+# To save locally
+for i, audio in enumerate(audios):
+    write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose())
+
+# To display in Google Colab
+import IPython.display as ipd
+
+for audio in audios:
+    display(ipd.Audio(audio, rate=pipe.unet.sample_rate)) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) —
+List of denoised audio samples as a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/9c9b8b687d542f537c59898bf2e0f5af.txt b/scrapped_outputs/9c9b8b687d542f537c59898bf2e0f5af.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/9c9b8b687d542f537c59898bf2e0f5af.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
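Because the abstract notes that Shap-E outputs can be rendered as both textured meshes and neural radiance fields, the sketch below shows the mesh path, complementing the GIF examples further down. It is only a minimal sketch: it assumes the openai/shap-e checkpoint used on this page and that the export_to_ply helper is available in your diffusers version; the prompt, guidance settings, and file name are illustrative.

 Copied
import torch
from diffusers import ShapEPipeline
from diffusers.utils import export_to_ply  # assumed export helper, alongside export_to_gif

pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# output_type="mesh" returns decoded meshes instead of rendered image frames
# (see the output_type argument of ShapEPipeline.__call__ below).
meshes = pipe(
    "a shark",
    guidance_scale=15.0,
    num_inference_steps=64,
    frame_size=256,
    output_type="mesh",
).images

# Save the first generated mesh as a .ply file.
ply_path = export_to_ply(meshes[0], "shark_3d.ply")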
ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... 
num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/9cbe47ce1bff1a94dc94a0141070c971.txt b/scrapped_outputs/9cbe47ce1bff1a94dc94a0141070c971.txt new file mode 100644 index 0000000000000000000000000000000000000000..68a889e0164baf54701fd5b5808fa585c5e8997f --- /dev/null +++ b/scrapped_outputs/9cbe47ce1bff1a94dc94a0141070c971.txt @@ -0,0 +1,303 @@ +Pipelines + +Pipelines provide a simple way to run state-of-the-art diffusion models in inference. +Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler +components - all of which are needed to have a functioning end-to-end diffusion system. +As an example, Stable Diffusion has three independently trained models: +Autoencoder +Conditional Unet +CLIP text encoder +a scheduler component, scheduler, +a CLIPFeatureExtractor, +as well as a safety checker. +All of these components are necessary to run stable diffusion in inference even though they were trained +or created independently from each other. +To that end, we strive to offer all open-sourced, state-of-the-art diffusion system under a unified API. +More specifically, we strive to provide pipelines that +can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (e.g. LDMTextToImagePipeline, uses the officially released weights of High-Resolution Image Synthesis with Latent Diffusion Models), +have a simple user interface to run the model in inference (see the Pipelines API section), +are easy to understand with code that is self-explanatory and can be read along-side the official paper (see Pipelines summary), +can easily be contributed by the community (see the Contribution section). +Note that pipelines do not (and should not) offer any training functionality. +If you are looking for official training examples, please have a look at examples. + +🧨 Diffusers Summary + +The following table summarizes all officially supported pipelines, their corresponding paper, and if +available a colab notebook to directly try them out. 
+Pipeline +Paper +Tasks +Colab +alt_diffusion +AltDiffusion +Image-to-Image Text-Guided Generation +- +audio_diffusion +Audio Diffusion +Unconditional Audio Generation + +controlnet +ControlNet with Stable Diffusion +Image-to-Image Text-Guided Generation +[ +cycle_diffusion +Cycle Diffusion +Image-to-Image Text-Guided Generation + +dance_diffusion +Dance Diffusion +Unconditional Audio Generation + +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation + +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image + +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation + +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting + +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation + +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +semantic_stable_diffusion +SEGA: Instructing Diffusion using Semantic Dimensions +Text-to-Image Generation + +stable_diffusion_text2img +Stable Diffusion +Text-to-Image Generation + +stable_diffusion_img2img +Stable Diffusion +Image-to-Image Text-Guided Generation + +stable_diffusion_inpaint +Stable Diffusion +Text-Guided Image Inpainting + +stable_diffusion_panorama +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation +Text-Guided Panorama View Generation + +stable_diffusion_pix2pix +InstructPix2Pix: Learning to Follow Image Editing Instructions +Text-Based Image Editing + +stable_diffusion_pix2pix_zero +Zero-shot Image-to-Image Translation +Text-Based Image Editing + +stable_diffusion_attend_and_excite +Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models +Text-to-Image Generation + +stable_diffusion_self_attention_guidance +Self-Attention Guidance +Text-to-Image Generation + +stable_diffusion_image_variation +Stable Diffusion Image Variations +Image-to-Image Generation + +stable_diffusion_latent_upscale +Stable Diffusion Latent Upscaler +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Depth-to-Image Text-Guided Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation + +stable_unclip +Stable unCLIP +Text-to-Image Generation + +stable_unclip +Stable unCLIP +Image-to-Image Text-Guided Generation + +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation + +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation + +versatile_diffusion +Versatile 
Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation + +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image Generation + +Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. +However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the Examples below. + +Pipelines API + +Diffusion models often consist of multiple independently-trained models or other previously existing components. +Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one. +During inference, we however want to be able to easily load all components and use them in inference - even if one component, e.g. CLIP’s text encoder, originates from a different library, such as Transformers. To that end, all pipelines provide the following functionality: +from_pretrained method that accepts a Hugging Face Hub repository id, e.g. runwayml/stable-diffusion-v1-5 or a path to a local directory, e.g. +”./stable-diffusion”. To correctly retrieve which models and components should be loaded, one has to provide a model_index.json file, e.g. runwayml/stable-diffusion-v1-5/model_index.json, which defines all components that should be +loaded into the pipelines. More specifically, for each model/component one needs to define the format : ["", ""]. is the attribute name given to the loaded instance of which can be found in the library or pipeline folder called "". +save_pretrained that accepts a local path, e.g. ./stable-diffusion under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, e.g. ./stable_diffusion/unet. +In addition, a model_index.json file is created at the root of the local path, e.g. ./stable_diffusion/model_index.json so that the complete pipeline can again be instantiated +from the local path. +to which accepts a string or torch.device to move all models that are of type torch.nn.Module to the passed device. The behavior is fully analogous to PyTorch’s to method. +__call__ method to use the pipeline in inference. __call__ defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the __call__ method can strongly vary from pipeline to pipeline. E.g. a text-to-image pipeline, such as StableDiffusionPipeline should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as DDPMPipeline on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for +each pipeline, one should look directly into the respective pipeline. +Note: All pipelines have PyTorch’s autograd disabled by decorating the __call__ method with a torch.no_grad decorator because pipelines should +not be used for training. If you want to store the gradients during the forward pass, we recommend writing your own pipeline, see also our community-examples + +Contribution + +We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire +all of our pipelines to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. 
+Self-contained: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file itself, should be inherited from (and only from) the DiffusionPipeline class or be directly attached to the model and scheduler components of the pipeline. +Easy-to-use: Pipelines should be extremely easy to use - one should be able to load the pipeline and +use it for its designated task, e.g. text-to-image generation, in just a couple of lines of code. Most +logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the __call__ method. +Easy-to-tweak: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our community-examples. If you feel that an important pipeline should be part of the official pipelines but isn’t, a contribution to the official pipelines would be even better. +One-purpose-only: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, e.g. image2image translation and in-painting, pipelines shall be used for one task only to keep them easy-to-tweak and readable. + +Examples + + +Text-to-Image generation with Stable Diffusion + + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] + +image.save("astronaut_rides_horse.png") + +Image-to-Image text-guided generation with Stable Diffusion + +The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. + + + Copied +import requests +from PIL import Image +from io import BytesIO + +from diffusers import StableDiffusionImg2ImgPipeline + +# load the pipeline +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + device +) + +# let's download an initial image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((768, 512)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +images[0].save("fantasy_landscape.png") +You can also run this example on colab + +Tweak prompts reusing seeds and latents + +You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. This notebook shows how to do it step by step. You can also run it in Google Colab . + +In-painting using Stable Diffusion + +The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and text prompt. 
+ + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +You can also run this example on colab diff --git a/scrapped_outputs/9d5259ff464bbd06f967f209a4e3fb6c.txt b/scrapped_outputs/9d5259ff464bbd06f967f209a4e3fb6c.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/9d5259ff464bbd06f967f209a4e3fb6c.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) 
DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equation (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model.<variant>.bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. 
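As a minimal, illustrative sketch of the from_pretrained() / save_pretrained() round trip described above (the repository id and local path are only examples, and the push_to_hub example below shows the Hub upload side): Copied
from diffusers import DDIMScheduler

# Load the scheduler configuration from the `scheduler` subfolder of a pipeline repository.
scheduler = DDIMScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")
print(scheduler.config.num_train_timesteps)

# Save the configuration locally so it can be reloaded later with `from_pretrained`.
scheduler.save_pretrained("./my-scheduler")
reloaded_scheduler = DDIMScheduler.from_pretrained("./my-scheduler")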
Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/9d7e1cef9db737b5861c3a5ba11fffda.txt b/scrapped_outputs/9d7e1cef9db737b5861c3a5ba11fffda.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/9d7e1cef9db737b5861c3a5ba11fffda.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
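As a minimal, illustrative sketch of how encode_prompt() can be combined with the prompt_embeds argument of __call__ (this assumes it returns a (prompt_embeds, negative_prompt_embeds) tuple like the other Stable Diffusion pipelines; the checkpoint and prompt are reused from the example above): Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7").to("cuda")

# Pre-compute the text embeddings once so they can be reused across calls.
prompt_embeds, _ = pipe.encode_prompt(
    prompt="Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,  # LCM conditions on a guidance embedding rather than classic CFG
)

# Pass the cached embeddings instead of the raw prompt.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=4, guidance_scale=8.0).images[0]
image.save("image.png")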
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
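The memory helpers documented above can be toggled on the pipeline before calling it. The following is an illustrative sketch rather than a recommended configuration; the checkpoint and prompt come from the example above, and the input image URL is one used elsewhere in this document: Copied
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7").to("cuda")

# Decode the latents in slices/tiles to lower peak memory at a small speed cost.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
)
image = pipe(
    "High altitude snowy mountains",
    image=init_image,
    num_inference_steps=4,
    guidance_scale=8.0,
    strength=0.6,
).images[0]

# Restore single-pass decoding when memory is no longer a constraint.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()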
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/9da93f718c70a9ee7997ef4925170201.txt b/scrapped_outputs/9da93f718c70a9ee7997ef4925170201.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/9da93f718c70a9ee7997ef4925170201.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. 
Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/9e0745f92d4a03e7bc89756567231258.txt b/scrapped_outputs/9e0745f92d4a03e7bc89756567231258.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c4120ca559ac7e154bd60c031ca497e0b8a77e7 --- /dev/null +++ b/scrapped_outputs/9e0745f92d4a03e7bc89756567231258.txt @@ -0,0 +1 @@ +Overview Generating high-quality outputs is computationally intensive, especially during each iterative step where you go from a noisy output to a less noisy output. One of 🤗 Diffuser’s goals is to make this technology widely accessible to everyone, which includes enabling fast inference on consumer and specialized hardware. This section will cover tips and tricks - like half-precision weights and sliced attention - for optimizing inference speed and reducing memory-consumption. You’ll also learn how to speed up your PyTorch code with torch.compile or ONNX Runtime, and enable memory-efficient attention with xFormers. There are also guides for running inference on specific hardware like Apple Silicon, and Intel or Habana processors. 
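To make the techniques this overview refers to a bit more concrete, here is a short sketch combining half-precision weights, sliced attention, and (optionally) xFormers and torch.compile; the dedicated guides linked from this section remain the authoritative reference, and the commented-out lines need optional dependencies: Copied
import torch
from diffusers import DiffusionPipeline

# Half-precision weights roughly halve memory use and speed up inference on most GPUs.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Sliced attention trades a little speed for a lower peak memory footprint.
pipe.enable_attention_slicing()

# Memory-efficient attention via xFormers (requires the xformers package).
# pipe.enable_xformers_memory_efficient_attention()

# torch.compile (PyTorch 2.0+) can speed up the UNet after a one-time compilation cost.
# pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]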
diff --git a/scrapped_outputs/9e21342294762f120a2ce75cbecf5b60.txt b/scrapped_outputs/9e21342294762f120a2ce75cbecf5b60.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/9e21342294762f120a2ce75cbecf5b60.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifer-free-guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of training the model is very sensitve to the guidance_scale values, high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5 . 
Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend you to try different values for `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choose the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. 
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/9e28ab757a77a8342169b5f708d48601.txt b/scrapped_outputs/9e28ab757a77a8342169b5f708d48601.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/9e28ab757a77a8342169b5f708d48601.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! 
There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. 
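If you want to control where those weights land, or share a single download across runs, you can point from_pretrained() at an explicit cache directory. This is a minimal sketch; the ./diffusers-cache path is just an illustrative choice: Copied
>>> from diffusers import DiffusionPipeline

>>> # reuse a local cache directory so repeated calls don't download the weights again
>>> pipeline = DiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", use_safetensors=True, cache_dir="./diffusers-cache"
... )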
You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. 
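As a quick check, you can read individual values from the frozen config, while any attempt to overwrite them is rejected. This is a small sketch; the exact error message may differ between versions: Copied
>>> model.config.sample_size, model.config.in_channels
(256, 3)

>>> try:
...     model.config["sample_size"] = 128  # the frozen config refuses in-place changes
... except Exception as err:
...     print(f"config is frozen: {err}")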
Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. 
Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/9e61993b9afec11a1df0d11527114804.txt b/scrapped_outputs/9e61993b9afec11a1df0d11527114804.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fbae7407bd5ca9d68e49a6dd0e5b1c668ca0baf --- /dev/null +++ b/scrapped_outputs/9e61993b9afec11a1df0d11527114804.txt @@ -0,0 +1,336 @@ +Inpainting The Stable Diffusion model can also be applied to inpainting which lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion. Tips It is recommended to use this pipeline with checkpoints that have been specifically fine-tuned for inpainting, such +as runwayml/stable-diffusion-inpainting. Default +text-to-image Stable Diffusion checkpoints, such as +runwayml/stable-diffusion-v1-5 are also compatible but they might be less performant. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
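To make those tips concrete, here is a minimal sketch that loads the recommended inpainting checkpoint and repaints the masked region. It reuses the documentation image URLs and prompt shown in the LCM-LoRA inpainting example earlier in this document: Copied
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image, make_image_grid

# checkpoint fine-tuned specifically for inpainting, as recommended in the tips above
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load base and mask image
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")
mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png")

prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
make_image_grid([init_image, mask_image, image], rows=1, cols=3)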
StableDiffusionInpaintPipeline class diffusers.StableDiffusionInpaintPipeline < source > ( vae: Union text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae ([AutoencoderKL, AsymmetricAutoencoderKL]) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be inpainted (which parts of the image to +be masked out with mask_image and repainted according to prompt). For both numpy array and pytorch +tensor, the expected value range is between [0, 1] If it’s a tensor or a list or tensors, the +expected shape should be (B, C, H, W) or (C, H, W). 
If it is a numpy array or a list of arrays, the +expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but +if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image with. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), or (H, W), and for a numpy array (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipe = StableDiffusionInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. 
save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
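To show how encode_prompt() fits together with the prompt_embeds and negative_prompt_embeds arguments documented above, here is a rough sketch that precomputes the text embeddings once and reuses them. Treat it as an assumption-laden illustration: in recent Diffusers releases encode_prompt() returns a (prompt_embeds, negative_prompt_embeds) tuple, and the negative prompt text here is just a placeholder, so check the return signature of your installed version: Copied
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png").resize((512, 512))
mask_image = load_image("https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png").resize((512, 512))

# encode the prompt once (assumes the tuple return order noted above)
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "Face of a yellow cat, high resolution, sitting on a park bench",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# reuse the precomputed embeddings instead of passing a prompt string
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=init_image,
    mask_image=mask_image,
).images[0]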
FlaxStableDiffusionInpaintPipeline class diffusers.FlaxStableDiffusionInpaintPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image inpainting using Stable Diffusion. 🧪 This is an experimental feature! This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array mask: Array masked_image: Array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. 
Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import PIL +>>> import requests +>>> from io import BytesIO +>>> from diffusers import FlaxStableDiffusionInpaintPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) + +>>> pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained( +... "xvjiarui/stable-diffusion-2-inpainting" +... ) + +>>> prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> init_image = num_samples * [init_image] +>>> mask_image = num_samples * [mask_image] +>>> prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs( +... prompt, init_image, mask_image +... ) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) +>>> processed_masked_images = shard(processed_masked_images) +>>> processed_masks = shard(processed_masks) + +>>> images = pipeline( +... prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True +... ).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/9e7feb0859b7841514bfec5fe1b16d50.txt b/scrapped_outputs/9e7feb0859b7841514bfec5fe1b16d50.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/9e7feb0859b7841514bfec5fe1b16d50.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. 
The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DDIMPipeline + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise); the output is an ImagePipelineOutput with PIL images +>>> image = pipe(eta=0.0, num_inference_steps=50).images[0] + +>>> # save image +>>> image.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/9eaf02245bd770a4bc64a7f335d5151b.txt b/scrapped_outputs/9eaf02245bd770a4bc64a7f335d5151b.txt new file mode 100644 index 0000000000000000000000000000000000000000..1ba99b2cebeb8e772b0fe72b5462a01245d5b5b7 --- /dev/null +++ b/scrapped_outputs/9eaf02245bd770a4bc64a7f335d5151b.txt @@ -0,0 +1,360 @@ +Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models + + +Overview + +Attend and Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over the image generation. +The abstract of the paper is the following: +Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.
+Resources +Project Page +Paper +Original Code +Demo + +Available Pipelines: + +Pipeline +Tasks +Colab +Demo +pipeline_semantic_stable_diffusion_attend_and_excite.py +Text-to-Image Generation +- +https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionAttendAndExcitePipeline + +model_id = "CompVis/stable-diffusion-v1-4" +pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe = pipe.to("cuda") + +prompt = "a cat and a frog" + +# use get_indices function to find out indices of the tokens you want to alter +pipe.get_indices(prompt) + +token_indices = [2, 5] +seed = 6141 +generator = torch.Generator("cuda").manual_seed(seed) + +images = pipe( + prompt=prompt, + token_indices=token_indices, + guidance_scale=7.5, + generator=generator, + num_inference_steps=50, + max_iter_to_alter=25, +).images + +image = images[0] +image.save(f"../images/{prompt}_{seed}.png") + +StableDiffusionAttendAndExcitePipeline + + +class diffusers.StableDiffusionAttendAndExcitePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion and Attend and Excite. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +token_indices: typing.List[int] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +max_iter_to_alter: int = 25 +thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} +scale_factor: int = 20 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +token_indices (List[int]) — +The token indices to alter with attend-and-excite. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
+ + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The first denoising steps are +where the attend-and-excite is applied. I.e. if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps will apply attend-and-excite and the last 5 will not +apply attend-and-excite. + + +thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. + + +scale_factor (int, optional, default to 20) — +Scale factor that controls the step size of each Attend and Excite update. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. :type attention_store: object + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... ).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +get_indices + +< +source +> +( +prompt: str + +) + + + +Utility function to list the indices of the tokens you wish to alter. diff --git a/scrapped_outputs/9ebe41304aa1daf5415f33d22f7d84fd.txt b/scrapped_outputs/9ebe41304aa1daf5415f33d22f7d84fd.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0ca022fc192e20ccf6ff3307b2799096156b70 --- /dev/null +++ b/scrapped_outputs/9ebe41304aa1daf5415f33d22f7d84fd.txt @@ -0,0 +1,44 @@ +Using Diffusers for reinforcement learning + +Support for one RL model and related pipelines is included in the experimental source of diffusers. +More models and examples coming soon! + +Diffuser Value-guided Planning + +You can run the model from Planning with Diffusion for Flexible Behavior Synthesis with Diffusers. +The script is located in the RL Examples folder. +Or, run this example in Colab + +class diffusers.experimental.ValueGuidedRLPipeline + +< +source +> +( +value_function: UNet1DModel +unet: UNet1DModel +scheduler: DDPMScheduler +env + +) + + +Parameters + +value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories based on reward. + + +unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. +env — An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +Pipeline for sampling actions from a diffusion model trained to predict sequences of states. +Original implementation inspired by this repository: https://github.com/jannerm/diffuser. diff --git a/scrapped_outputs/9f040df03935a0127a6936e3f4dee3b4.txt b/scrapped_outputs/9f040df03935a0127a6936e3f4dee3b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..b16b1a8e34aaa9499323c43a56a0084cfbc1c8e2 --- /dev/null +++ b/scrapped_outputs/9f040df03935a0127a6936e3f4dee3b4.txt @@ -0,0 +1,42 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion.
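As a minimal usage sketch (assuming a Stable Diffusion checkpoint such as runwayml/stable-diffusion-v1-5), the scheduler can be swapped into an existing pipeline through its from_config method:

 Copied
import torch
from diffusers import DiffusionPipeline, KDPM2AncestralDiscreteScheduler

# load a pipeline and replace its default scheduler, reusing the existing scheduler config
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# ancestral sampling is stochastic, so fix a generator for reproducible results
generator = torch.Generator("cuda").manual_seed(0)
image = pipe("an astronaut riding a horse on mars", generator=generator).images[0]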
KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. 
timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/9f05170f5c060bd7d54763b1c4f09e6b.txt b/scrapped_outputs/9f05170f5c060bd7d54763b1c4f09e6b.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fcb84525093d37b917b1ff715c495271505cd29 --- /dev/null +++ b/scrapped_outputs/9f05170f5c060bd7d54763b1c4f09e6b.txt @@ -0,0 +1,68 @@ +Text-guided image-to-image generation + + + + + + + + + + + + +The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. +Before you begin, make sure you have all the necessary libraries installed: + + + Copied +!pip install diffusers transformers ftfy accelerate +Get started by creating a StableDiffusionImg2ImgPipeline with a pretrained Stable Diffusion model like nitrosocke/Ghibli-Diffusion. + + + Copied +import torch +import requests +from PIL import Image +from io import BytesIO +from diffusers import StableDiffusionImg2ImgPipeline + +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("nitrosocke/Ghibli-Diffusion", torch_dtype=torch.float16).to( + device +) +Download and preprocess an initial image so you can pass it to the pipeline: + + + Copied +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image.thumbnail((768, 768)) +init_image + +💡 strength is a value between 0.0 and 1.0 that controls the amount of noise added to the input image. Values that approach 1.0 allow for lots of variations but will also produce images that are not semantically consistent with the input. 
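For example, a rough sketch of this trade-off, reusing the pipe and init_image created above with an illustrative prompt (the full example continues below):

 Copied
comparison_prompt = "ghibli style, a serene mountain lake"  # illustrative; prompt style for this checkpoint is explained in the next step

# low strength adds little noise, so the output stays close to the input sketch
close_image = pipe(prompt=comparison_prompt, image=init_image, strength=0.3).images[0]

# high strength adds much more noise, so the output varies more but can drift from the input
free_image = pipe(prompt=comparison_prompt, image=init_image, strength=0.9).images[0]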
+Define the prompt (for this checkpoint finetuned on Ghibli-style art, you need to prefix the prompt with the ghibli style tokens) and run the pipeline: + + + Copied +prompt = "ghibli style, a fantasy landscape with castles" +generator = torch.Generator(device=device).manual_seed(1024) +image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0] +image + +You can also try experimenting with a different scheduler to see how that affects the output: + + + Copied +from diffusers import LMSDiscreteScheduler + +lms = LMSDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.scheduler = lms +generator = torch.Generator(device=device).manual_seed(1024) +image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0] +image + +Check out the Spaces below, and try generating images with different values for strength. You’ll notice that using lower values for strength produces images that are more similar to the original image. +Feel free to also switch the scheduler to the LMSDiscreteScheduler and see how that affects the output. diff --git a/scrapped_outputs/9f0d8ff246ed6df4a5d1e4f7d9f9f503.txt b/scrapped_outputs/9f0d8ff246ed6df4a5d1e4f7d9f9f503.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/9f0d8ff246ed6df4a5d1e4f7d9f9f503.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/9f2407a480bd9a35c0549e7f99124b97.txt b/scrapped_outputs/9f2407a480bd9a35c0549e7f99124b97.txt new file mode 100644 index 0000000000000000000000000000000000000000..f86c7601a8960e5b9b1d28395df88617938da400 --- /dev/null +++ b/scrapped_outputs/9f2407a480bd9a35c0549e7f99124b97.txt @@ -0,0 +1,42 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. 
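As a minimal sketch (assuming the runwayml/stable-diffusion-v1-5 checkpoint), the scheduler can be loaded from a checkpoint's scheduler config and prepared for inference, which makes its discrete timesteps and sigmas easy to inspect:

 Copied
from diffusers import LMSDiscreteScheduler

# load the scheduler configuration shipped with a Stable Diffusion checkpoint
scheduler = LMSDiscreteScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")

# prepare the discrete timesteps for 25 denoising steps (run before inference)
scheduler.set_timesteps(25)
print(scheduler.timesteps)  # 25 timesteps in descending order
print(scheduler.sigmas)     # the matching noise levels used by the linear multistep solver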
LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/9f5215dff12b318e0f726a3e8367147e.txt b/scrapped_outputs/9f5215dff12b318e0f726a3e8367147e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/9f5215dff12b318e0f726a3e8367147e.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. 
You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/9f8a583f5cc1fde911b43b7d1c69352b.txt b/scrapped_outputs/9f8a583f5cc1fde911b43b7d1c69352b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6239505b8ff5f3f7eb6043b475677f1d948af531 --- /dev/null +++ b/scrapped_outputs/9f8a583f5cc1fde911b43b7d1c69352b.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes, or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timestep and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timestep. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. 
For this to work properly, the callback also needs to change the batch size of prompt_embeds after setting guidance_scale=0.0. Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timestep * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step, and modifies the pipeline attributes and tensor variables for the next denoising step. With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! Interrupt the diffusion process Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they're unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. This callback function should take the following arguments: pipe, i, t, and callback_kwargs (callback_kwargs must be returned). Set the pipeline's _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50.
Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) diff --git a/scrapped_outputs/9fc2365e4b1d05629da73e12dbaacc4b.txt b/scrapped_outputs/9fc2365e4b1d05629da73e12dbaacc4b.txt new file mode 100644 index 0000000000000000000000000000000000000000..f57b44311834487e66dc102fce3208b71376c674 --- /dev/null +++ b/scrapped_outputs/9fc2365e4b1d05629da73e12dbaacc4b.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. 
And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames[0] +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/9fc5b33d35bed7ac18b2379de8023f88.txt b/scrapped_outputs/9fc5b33d35bed7ac18b2379de8023f88.txt new file mode 100644 index 0000000000000000000000000000000000000000..b7d158c83f1d4ace037f662eae21300f2008f1a9 --- /dev/null +++ b/scrapped_outputs/9fc5b33d35bed7ac18b2379de8023f88.txt @@ -0,0 +1,568 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. 
Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends for ODE/SDE solvers:set use_karras_sigmas=True or lu_lambdas=True to improve image quality set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE) Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionXLPipelineOutput or tuple + +StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. 
tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed. guidance_scale is defined as φ in equation 16 of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution.
Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. 
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. 
If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0).
Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation.
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
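The fused QKV projection methods above are easiest to understand with a concrete toggle. The following is a minimal sketch, assuming the stabilityai/stable-diffusion-xl-base-1.0 checkpoint and fp16 settings from the inpainting example earlier in this section; the commented-out inference step stands in for any call to the pipeline. Copied >>> import torch
>>> from diffusers import StableDiffusionXLInpaintPipeline

>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
... ).to("cuda")

>>> # Fuse the query/key/value projections in self-attention and the key/value projections in cross-attention.
>>> pipe.fuse_qkv_projections(unet=True, vae=True)
>>> # ... run the inpainting call shown in the example above ...
>>> # Restore the original, unfused projection layers.
>>> pipe.unfuse_qkv_projections() Because the API is experimental, it is safest to fuse right before inference and unfuse afterwards rather than keeping the pipeline permanently fused.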
diff --git a/scrapped_outputs/9fc8ae1f0477429a4b597b2aa87d1cac.txt b/scrapped_outputs/9fc8ae1f0477429a4b597b2aa87d1cac.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/9fc8ae1f0477429a4b597b2aa87d1cac.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. 
As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script that are relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, the training loop starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image. Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process.
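To see why the conditioning dropout matters, it helps to look at what it enables at inference time: classifier-free guidance over two conditionings. The snippet below is a small, self-contained sketch (not code from the training script) of how the three UNet predictions are typically blended with guidance_scale and image_guidance_scale, following the formulation in the InstructPix2Pix paper; the helper name and tensor shapes are illustrative only. Copied import torch

def combine_noise_predictions(eps_uncond, eps_image, eps_full, guidance_scale=10.0, image_guidance_scale=1.5):
    # eps_uncond: prediction with both the edit instruction and the original image dropped
    # eps_image:  prediction conditioned on the original image only (instruction dropped)
    # eps_full:   prediction conditioned on both the instruction and the original image
    return (
        eps_uncond
        + image_guidance_scale * (eps_image - eps_uncond)
        + guidance_scale * (eps_full - eps_image)
    )

# Toy tensors standing in for the three UNet outputs of a single denoising step.
eps_uncond, eps_image, eps_full = (torch.randn(1, 4, 32, 32) for _ in range(3))
print(combine_noise_predictions(eps_uncond, eps_image, eps_full).shape)  # torch.Size([1, 4, 32, 32])
Raising image_guidance_scale pulls the result toward the original image, while raising guidance_scale pulls it toward the edit instruction, which is why the inference example below treats both as knobs worth experimenting with.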
Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 
🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/a00d3e9de6553dfd1ac59c170331c55d.txt b/scrapped_outputs/a00d3e9de6553dfd1ac59c170331c55d.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/a00d3e9de6553dfd1ac59c170331c55d.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. 
See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a text-to-image Pytorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained().
+ torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. 
This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a image-to-image Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a image-to-image Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. 
The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. 
Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a inpainting Pytorch diffusion pipeline from pretrained pipeline weight. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detect the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Find the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a inpainting Pytorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. 
All the modules the pipeline class contain will be used to initialize the new pipeline without reallocating +additional memoery. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/a022250a39b38976b802d9aa3cddc753.txt b/scrapped_outputs/a022250a39b38976b802d9aa3cddc753.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/a022250a39b38976b802d9aa3cddc753.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. 
Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/a03545535d90f17569e5af53d26a1c01.txt b/scrapped_outputs/a03545535d90f17569e5af53d26a1c01.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a0360d39ce061379afe0488c7463c5b4.txt b/scrapped_outputs/a0360d39ce061379afe0488c7463c5b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/a0360d39ce061379afe0488c7463c5b4.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. 
Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/a04d7b7dd8c7a5f11b965daa836951a4.txt b/scrapped_outputs/a04d7b7dd8c7a5f11b965daa836951a4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/a04d7b7dd8c7a5f11b965daa836951a4.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. 
This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll find a function that generates the timestep weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic).
Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." 
+image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/a08dbcefb7a9aa5a0f55c7815220484c.txt b/scrapped_outputs/a08dbcefb7a9aa5a0f55c7815220484c.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ac22e44f82a2bfeede971a5b1063163f7e9fc2 --- /dev/null +++ b/scrapped_outputs/a08dbcefb7a9aa5a0f55c7815220484c.txt @@ -0,0 +1,176 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5. Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9-channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image.
Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: masterpiece, bestquality, sunset. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a hat" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") masterpiece, bestquality, sunset. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. 
Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → TextToVideoSDPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("../checkpoints/pia-diffusers") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... ) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Optional = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. 
This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
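The toggles documented above can be combined on a single loaded pipeline. The following is a minimal sketch, reusing the checkpoints from the usage example earlier; the FreeU values shown are a commonly cited starting point for Stable Diffusion 1.5 backbones rather than an official recommendation, so treat them as an assumption to tune: Copied
import torch
from diffusers import MotionAdapter, PIAPipeline

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)

# Memory savers: decode/encode with the VAE in slices and tiles
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# FreeU re-weights skip (s1, s2) and backbone (b1, b2) features;
# these values are assumed starting points for SD 1.5-based UNets
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

# FreeInit noise re-initialization with the default Butterworth filter
pipe.enable_free_init(method="butterworth", num_iters=3)

# Each toggle can be switched off again before the next call
pipe.disable_free_init()
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()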
enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[PIL.Image.Image]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, — NumPy array of shape `(batch_size, num_frames, channels, height, width, — Torch tensor of shape (batch_size, num_frames, channels, height, width). — Output class for PIAPipeline. diff --git a/scrapped_outputs/a0ad0683dbf610849b9664e35b43397a.txt b/scrapped_outputs/a0ad0683dbf610849b9664e35b43397a.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/a0ad0683dbf610849b9664e35b43397a.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. 
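As the parameter reference below notes, gligen_boxes are given as normalized [xmin, ymin, xmax, ymax] coordinates between 0 and 1, with one entry in gligen_phrases per box. If your boxes come in pixel coordinates, a small helper along these lines (hypothetical, not part of the library) converts them before calling the pipeline: Copied
from typing import List, Sequence, Tuple

def normalize_boxes(
    pixel_boxes: Sequence[Tuple[int, int, int, int]], width: int, height: int
) -> List[List[float]]:
    # Convert pixel-space [xmin, ymin, xmax, ymax] boxes into the
    # normalized [0, 1] format expected by the `gligen_boxes` argument.
    return [
        [xmin / width, ymin / height, xmax / width, ymax / height]
        for (xmin, ymin, xmax, ymax) in pixel_boxes
    ]

# Example: one box covering roughly the lower-right quarter of a 512x512 image
boxes = normalize_boxes([(256, 256, 512, 512)], width=512, height=512)
# -> [[0.5, 0.5, 1.0, 1.0]]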
StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. 
Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. 
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to process the reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project the image embedding into the phrase embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents.
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box. input_phrases_mask (int or List[int]) — +pre phrases mask input defined by the corresponding input_phrases_mask input_images_mask (int or List[int]) — +pre images mask input defined by the corresponding input_images_mask gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases.
Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. 
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/a0b8b893520244b12bdc705fafccbb7f.txt b/scrapped_outputs/a0b8b893520244b12bdc705fafccbb7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/a0b8b893520244b12bdc705fafccbb7f.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. 
It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll find a function to generate the timestep weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you choose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and used to sample the timesteps, which are then used to add noise to the model input: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command.
You’ll also need to add the --validation_prompt and --validation_epochs parameters to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." +image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/a0c38ca52283716c160d1bf72eb6af85.txt b/scrapped_outputs/a0c38ca52283716c160d1bf72eb6af85.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/a0c38ca52283716c160d1bf72eb6af85.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling.
By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. 
num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. 
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/a0e0a27828acef640b3671798c60c23a.txt b/scrapped_outputs/a0e0a27828acef640b3671798c60c23a.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/a0e0a27828acef640b3671798c60c23a.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, in the training loop, it starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.
Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 
Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/a12cc378399bade1083f1506cbc658e5.txt b/scrapped_outputs/a12cc378399bade1083f1506cbc658e5.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/a12cc378399bade1083f1506cbc658e5.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. 
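As a minimal sketch of that auto-detection (the checkpoint id is just an illustrative example), loading a Stable Diffusion checkpoint through the generic class returns the task-specific pipeline with its components already populated:

from diffusers import DiffusionPipeline

# Loading through the generic class resolves the concrete pipeline type from the checkpoint config.
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(type(pipeline).__name__)           # StableDiffusionPipeline
print(list(pipeline.components.keys()))  # e.g. vae, text_encoder, tokenizer, unet, scheduler, ...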
You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
_optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. 
+ +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. 
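A minimal usage sketch for download() (the repository id is only an example): it fetches and caches the pipeline files without instantiating any of the models, and the returned path can later be passed to from_pretrained(), for instance when preparing a machine that will run offline:
Copied
from diffusers import DiffusionPipeline

# Download and cache the full pipeline repository; nothing is loaded into memory yet.
local_path = DiffusionPipeline.download("runwayml/stable-diffusion-v1-5")

# Later, load the pipeline entirely from the local cache.
pipeline = DiffusionPipeline.from_pretrained(local_path)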
enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. 
When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. 
Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. 
It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... 
dtype=jnp.bfloat16, +... ) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/a15522ef97803ad68bca00a53eb34dbe.txt b/scrapped_outputs/a15522ef97803ad68bca00a53eb34dbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/a15522ef97803ad68bca00a53eb34dbe.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
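The presets essentially bundle the safety-guidance arguments documented in the __call__ section below, so the individual knobs can also be passed directly instead; a minimal sketch with purely illustrative values:
Copied
>>> out = pipeline(
...     prompt=prompt,
...     sld_guidance_scale=2000,   # stronger safety guidance than the default of 1000
...     sld_warmup_steps=7,        # apply safety guidance a little earlier in the denoising process
...     sld_threshold=0.025,
...     sld_momentum_scale=0.5,
...     sld_mom_beta=0.7,
... )
>>> image = out.images[0]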
StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only be applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. 
leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/a16f273c8e93bf1dba4fb5db23fb1267.txt b/scrapped_outputs/a16f273c8e93bf1dba4fb5db23fb1267.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a1a1823846456f2c770faf153a00ffe4.txt b/scrapped_outputs/a1a1823846456f2c770faf153a00ffe4.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/a1a1823846456f2c770faf153a00ffe4.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 
🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! 
One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. 
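For a bit of extra intuition (this sketch is not part of the original quicktour): the checkpoint predicts noise (its prediction_type is "epsilon"), so a rough estimate of the clean image can be recovered in closed form as x0_hat = (x_t - sqrt(1 - alpha_bar_t) * predicted_noise) / sqrt(alpha_bar_t). Keeping track of alpha_bar_t and deciding how to move from one timestep to the next is exactly the bookkeeping a scheduler does; the DDPMScheduler used here is the same one introduced in the next section.
Copied
>>> from diffusers import DDPMScheduler

>>> scheduler = DDPMScheduler.from_pretrained(repo_id)
>>> alpha_bar_t = scheduler.alphas_cumprod[2]  # cumulative alpha product at timestep 2
>>> clean_estimate = (noisy_sample - (1 - alpha_bar_t) ** 0.5 * noisy_residual) / alpha_bar_t**0.5
>>> clean_estimate.shape
torch.Size([1, 3, 256, 256])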
Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. 
See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/a1a38ca57122010fa3f17490e15b1b73.txt b/scrapped_outputs/a1a38ca57122010fa3f17490e15b1b73.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/a1a38ca57122010fa3f17490e15b1b73.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. 
scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... 
).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/a1b101819179198acf695ce5ffdb9b41.txt b/scrapped_outputs/a1b101819179198acf695ce5ffdb9b41.txt new file mode 100644 index 0000000000000000000000000000000000000000..00a1475bcc5e7b5a02879867e599566c0cc82ebb --- /dev/null +++ b/scrapped_outputs/a1b101819179198acf695ce5ffdb9b41.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. 
+See table below for details on the three checkpoints:

Checkpoint        Task            UNet Model Size   Total Model Size   Training Data / h
audioldm2         Text-to-audio   350M              1.1B               1150k
audioldm2-large   Text-to-audio   750M              1.5B               1150k
audioldm2-music   Text-to-music   350M              1.1B               665k

Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder.
tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... 
).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. 
max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 langauge model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first. 
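To make the role of the projection model more concrete, here is a minimal, self-contained sketch of the idea it implements. This is not the diffusers implementation; the class name, dimensions, and layer names below are illustrative assumptions. Each text-encoder output is linearly projected to the language-model dimension and wrapped with learned start/end embeddings, and the two projected sequences are concatenated to form the conditioning for the GPT2 language model. Copied
import torch
import torch.nn as nn


class ToyProjectionModel(nn.Module):
    """Conceptual sketch only -- not the diffusers AudioLDM2ProjectionModel.

    Projects two text-encoder outputs to a shared (language-model) dimension,
    wraps each sequence with learned SOS/EOS embeddings, and concatenates them.
    """

    def __init__(self, clap_dim=512, t5_dim=1024, lm_dim=768):  # dimensions are assumptions
        super().__init__()
        self.proj = nn.Linear(clap_dim, lm_dim)    # first text-encoder branch
        self.proj_1 = nn.Linear(t5_dim, lm_dim)    # second text-encoder branch (_1)
        self.sos_embed = nn.Parameter(torch.randn(lm_dim))
        self.eos_embed = nn.Parameter(torch.randn(lm_dim))
        self.sos_embed_1 = nn.Parameter(torch.randn(lm_dim))
        self.eos_embed_1 = nn.Parameter(torch.randn(lm_dim))

    @staticmethod
    def _wrap(hidden, sos, eos):
        # prepend the learned SOS vector and append the learned EOS vector
        batch = hidden.shape[0]
        return torch.cat([sos.expand(batch, 1, -1), hidden, eos.expand(batch, 1, -1)], dim=1)

    def forward(self, hidden_states, hidden_states_1):
        h = self._wrap(self.proj(hidden_states), self.sos_embed, self.eos_embed)
        h_1 = self._wrap(self.proj_1(hidden_states_1), self.sos_embed_1, self.eos_embed_1)
        # the concatenated sequence is the conditioning fed to the language model
        return torch.cat([h, h_1], dim=1)


projection = ToyProjectionModel()
clap_hidden = torch.randn(2, 1, 512)    # pooled CLAP embedding as a length-1 sequence
t5_hidden = torch.randn(2, 10, 1024)    # Flan-T5 token embeddings
print(projection(clap_hidden, t5_hidden).shape)  # torch.Size([2, 15, 768])
In the pipeline described above, this concatenated sequence plays the role of the prompt that the GPT2 language model extends auto-regressively.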
forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. 
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +CrossAttnDownBlock2D, CrossAttnUpBlock2D, +UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). 
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/a1d4005d39ed2712e4d689df39e7a124.txt b/scrapped_outputs/a1d4005d39ed2712e4d689df39e7a124.txt new file mode 100644 index 0000000000000000000000000000000000000000..68e7435f9073e50ea2e9fbf1d917e1ce99d02f13 --- /dev/null +++ b/scrapped_outputs/a1d4005d39ed2712e4d689df39e7a124.txt @@ -0,0 +1,264 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. 
All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for: text-to-image, image-to-image, combined with styled LoRAs, ControlNet/T2I-Adapter, inpainting, and AnimateDiff. Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task-specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of training, the model is very sensitive to guidance_scale values; high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime.
Copied from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5 . Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet + +For this example, we'll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
+ + Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend you to try different values for `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choose the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. 
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/a1e1e25570999dc6e5024c3570dacacf.txt b/scrapped_outputs/a1e1e25570999dc6e5024c3570dacacf.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/a1e1e25570999dc6e5024c3570dacacf.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. 
Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, install 🤗 Diffusers with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments. +For instance, a bug may have been fixed since the last official release without a new release having been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Copied pip install -e ".[torch]" JAX Copied pip install -e ".[flax]" These commands link the cloned repository folder to your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache, which is usually in your home directory. You can change the cache location by specifying the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. +The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features.
+Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information,and we respect your privacy. +You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/a1fe88461d49316d47644ee7204d1357.txt b/scrapped_outputs/a1fe88461d49316d47644ee7204d1357.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/a1fe88461d49316d47644ee7204d1357.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. 
Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. 
Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. 
Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. 
Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. 
+The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. 
+Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. 
You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. 
If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. 
After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/a212824fa46c02d44e8f1bef52a63b47.txt b/scrapped_outputs/a212824fa46c02d44e8f1bef52a63b47.txt new file mode 100644 index 0000000000000000000000000000000000000000..f53414f9cec5f2da51d8d93bb294c2d2d37a79e4 --- /dev/null +++ b/scrapped_outputs/a212824fa46c02d44e8f1bef52a63b47.txt @@ -0,0 +1,294 @@ +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation + + +Overview + +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. 
+The abstract of the paper is the following: +*Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionPanoramaPipeline +Text-Guided Panorama View Generation +🤗 Space) + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +model_ckpt = "stabilityai/stable-diffusion-2-base" +scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, torch_dtype=torch.float16) + +pipe = pipe.to("cuda") + +prompt = "a photo of the dolomites" +image = pipe(prompt).images[0] +image.save("dolomites.png") + +StableDiffusionPanoramaPipeline + + +class diffusers.StableDiffusionPanoramaPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: DDIMScheduler +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. The original work +on Multi Diffsion used the DDIMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using “MultiDiffusion: Fusing Diffusion Paths for Controlled Image +Generation”. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). 
+To generate panorama-like images, be sure to pass the width parameter accordingly when using the pipeline. Our +recommendation for the width value is 2048. This is the default value of the width parameter for this pipeline. + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = 512 +width: typing.Optional[int] = 2048 +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to 512 — +The height in pixels of the generated image. + + +width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept to a high number because the +pipeline is supposed to be used for generating panorama-like images. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
+
+
+negative_prompt_embeds (torch.FloatTensor, optional) —
+Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
+weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input
+argument.
+
+
+output_type (str, optional, defaults to "pil") —
+The output format of the generated image. Choose between
+PIL: PIL.Image.Image or np.array.
+
+
+return_dict (bool, optional, defaults to True) —
+Whether or not to return a StableDiffusionPipelineOutput instead of a
+plain tuple.
+
+
+callback (Callable, optional) —
+A function that will be called every callback_steps steps during inference. The function will be
+called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
+
+
+callback_steps (int, optional, defaults to 1) —
+The frequency at which the callback function will be called. If not specified, the callback will be
+called at every step.
+
+
+cross_attention_kwargs (dict, optional) —
+A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under
+self.processor in
+diffusers.cross_attention.
+
+
+Returns
+
+StableDiffusionPipelineOutput or tuple
+
+
+
+StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
+
+
+Function invoked when calling the pipeline for generation.
+
+Examples:
+
+
+ Copied
+>>> import torch
+>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler
+
+>>> model_ckpt = "stabilityai/stable-diffusion-2-base"
+>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
+>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained(
+... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
+... )
+
+>>> pipe = pipe.to("cuda")
+
+>>> prompt = "a photo of the dolomites"
+>>> image = pipe(prompt).images[0]
+
+disable_vae_slicing
+
+<
+source
+>
+(
+)
+
+
+
+Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to
+computing decoding in one step.
+
+enable_sequential_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
+text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to a
+torch.device('meta'), and loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
+
+enable_vae_slicing
+
+<
+source
+>
+(
+)
+
+
+
+Enable sliced VAE decoding.
+When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
+steps. This is useful to save some memory and allow larger batch sizes.
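The memory helpers above can be combined with the panorama pipeline. The following is a minimal, hedged sketch that reuses the checkpoint and prompt from the usage example earlier on this page; enable_vae_slicing() and enable_sequential_cpu_offload() are the methods documented above, while the output filename is arbitrary. Copied
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
)

# Decode the wide panorama latent in slices to lower peak memory usage.
pipe.enable_vae_slicing()

# For the largest savings (at the cost of speed), offload submodules to CPU
# instead of moving the whole pipeline to the GPU:
# pipe.enable_sequential_cpu_offload()
pipe = pipe.to("cuda")

image = pipe("a photo of the dolomites", width=2048).images[0]
image.save("dolomites_panorama.png")
Exact memory savings depend on the checkpoint, resolution, and hardware, so treat this as a starting point rather than a tuned configuration.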
diff --git a/scrapped_outputs/a26d6f0d6f19935b3aa7c3b62ed26167.txt b/scrapped_outputs/a26d6f0d6f19935b3aa7c3b62ed26167.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a28f457645e358f6080661199fb204fd.txt b/scrapped_outputs/a28f457645e358f6080661199fb204fd.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/a28f457645e358f6080661199fb204fd.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. 
Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. 
Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/a2b2d18f57dd08345de63af3547b58a1.txt b/scrapped_outputs/a2b2d18f57dd08345de63af3547b58a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/a2b2d18f57dd08345de63af3547b58a1.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. 
beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. 
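Putting the constructor arguments above together with the thresholding tip from the start of this page, the snippet below is a hedged sketch of how one might configure the inverse scheduler for a pixel-space model; every value shown is illustrative rather than a recommended setting. Copied
from diffusers import DPMSolverMultistepInverseScheduler

# Illustrative configuration for a pixel-space diffusion model; dynamic
# thresholding should stay disabled for latent-space models such as Stable Diffusion.
inverse_scheduler = DPMSolverMultistepInverseScheduler(
    beta_schedule="scaled_linear",
    algorithm_type="dpmsolver++",
    solver_order=2,
    thresholding=True,
    dynamic_thresholding_ratio=0.995,
    sample_max_value=1.0,
)

# Discretize the diffusion chain before running the inversion loop.
inverse_scheduler.set_timesteps(num_inference_steps=50)
In practice, the inverse scheduler is usually built from an existing scheduler’s configuration (for example with from_config(pipe.scheduler.config)) so that its noise schedule matches the forward scheduler it inverts.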
DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
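In practice the pieces above are used together: construct the inverse scheduler from an existing pipeline's scheduler config, call set_timesteps(), and then call step() inside the loop that walks the latents back toward noise. The following is only a minimal sketch of that call order, not part of this reference: the Stable Diffusion checkpoint, the prompt, and the random placeholder latents (standing in for a VAE-encoded image) are illustrative assumptions, and thresholding is left at its default because Stable Diffusion is a latent-space model (see the Tips above).
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepInverseScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the pipeline's noise schedule for the inverse scheduler.
inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipe.scheduler.config)
inverse_scheduler.set_timesteps(50, device="cuda")  # must run before stepping

# Placeholder latents; in practice these would be the VAE-encoded image to invert.
latents = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)

# Text conditioning through the pipeline's tokenizer and text encoder.
text_inputs = pipe.tokenizer(
    "a photo of a cat",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    return_tensors="pt",
)
prompt_embeds = pipe.text_encoder(text_inputs.input_ids.to("cuda"))[0]

with torch.no_grad():
    for t in inverse_scheduler.timesteps:
        noise_pred = pipe.unet(latents, t, encoder_hidden_states=prompt_embeds).sample
        latents = inverse_scheduler.step(noise_pred, t, latents).prev_sample  # progressively noisier
The resulting noisy latents can then be denoised again with a regular multistep scheduler to reconstruct or edit the original image, which is the pattern behind the DiffEdit latent inversion referenced above.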
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/a2b6b59f4dea70c5b9bf8b5c67ae8084.txt b/scrapped_outputs/a2b6b59f4dea70c5b9bf8b5c67ae8084.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4ac146d99ffb1b55007c36017f8a921b08b2a37 --- /dev/null +++ b/scrapped_outputs/a2b6b59f4dea70c5b9bf8b5c67ae8084.txt @@ -0,0 +1,327 @@ +Semantic Guidance + +Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Diffusion using Semantic Dimensions and provides strong semantic control over the image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, and stay true to the original image composition. +The abstract of the paper is the following: +Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on a variety of tasks and provide evidence for its versatility and flexibility. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_semantic_stable_diffusion.py +Text-to-Image Generation + +Coming Soon + +Tips + +The Semantic Guidance pipeline can be used with any Stable Diffusion checkpoint. + +Run Semantic Guidance + +The interface of SemanticStableDiffusionPipeline provides several additional parameters to influence the image generation. 
+Example usage: + + + Copied +import torch +from diffusers import SemanticStableDiffusionPipeline + +pipe = SemanticStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +out = pipe( + prompt="a photo of the face of a woman", + num_images_per_prompt=1, + guidance_scale=7, + editing_prompt=[ + "smiling, smile", # Concepts to apply + "glasses, wearing glasses", + "curls, wavy hair, curly hair", + "beard, full beard, mustache", + ], + reverse_editing_direction=[False, False, False, False], # Direction of guidance i.e. increase all concepts + edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept + edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept + edit_threshold=[ + 0.99, + 0.975, + 0.925, + 0.96, + ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions + edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance + edit_mom_beta=0.6, # Momentum beta + edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +) +For more examples, check the colab notebook. + +SemanticStableDiffusionPipelineOutput + + +class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array represent the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Stable Diffusion pipelines. + +SemanticStableDiffusionPipeline + + +class diffusers.SemanticStableDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation with latent editing.
+This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +This model builds on the implementation of [‘StableDiffusionPipeline’] + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +editing_prompt: typing.Union[str, typing.List[str], NoneType] = None +editing_prompt_embeddings: typing.Optional[torch.Tensor] = None +reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False +edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5 +edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 10 +edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None +edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9 +edit_momentum_scale: typing.Optional[float] = 0.1 +edit_mom_beta: typing.Optional[float] = 0.4 +edit_weights: typing.Optional[typing.List[float]] = None +sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None + +) +→ +SemanticStableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +editing_prompt (str or List[str], optional) — +The prompt or prompts to use for Semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. + + +editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. + + +reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. + + +edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to editing_prompt. +edit_guidance_scale is defined as s_e of equation 6 of SEGA +Paper. + + +edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance will not be applied. Momentum +will still be calculated for those steps and applied once all warmup periods are over. +edit_warmup_steps is defined as delta (δ) of SEGA Paper. + + +edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance will no longer be applied. + + +edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. + + +edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0 +momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than edit_warmup_steps. Momentum will only be added to latent guidance once all warmup periods are +finished. edit_momentum_scale is defined as s_m of equation 7 of SEGA +Paper. + + +edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than edit_warmup_steps. edit_mom_beta is defined as beta_m (β) of equation 8 of SEGA +Paper. + + +edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. edit_weights is defined as g_i of equation 9 of SEGA +Paper. + + +sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps.
+ + +Returns + +SemanticStableDiffusionPipelineOutput or tuple + + + +SemanticStableDiffusionPipelineOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. diff --git a/scrapped_outputs/a2bc46a5605a295f564117a780e5e56d.txt b/scrapped_outputs/a2bc46a5605a295f564117a780e5e56d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a2c16b913c17240d015ccb7909ccbe81.txt b/scrapped_outputs/a2c16b913c17240d015ccb7909ccbe81.txt new file mode 100644 index 0000000000000000000000000000000000000000..1afea408cd64c8c6388f2de4ad0e81536407a2fb --- /dev/null +++ b/scrapped_outputs/a2c16b913c17240d015ccb7909ccbe81.txt @@ -0,0 +1,6 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/a2f14179a7c72fc649190322cfcb709b.txt b/scrapped_outputs/a2f14179a7c72fc649190322cfcb709b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a302549f7e44f33023037b028e33672a.txt b/scrapped_outputs/a302549f7e44f33023037b028e33672a.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/a302549f7e44f33023037b028e33672a.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. 
The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. 
If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... 
) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/a3185b7e8162503b7c554797cb89eaf4.txt b/scrapped_outputs/a3185b7e8162503b7c554797cb89eaf4.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/a3185b7e8162503b7c554797cb89eaf4.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... 
).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. 
diff --git a/scrapped_outputs/a3340ebf545416126098ed1c4da6cfd5.txt b/scrapped_outputs/a3340ebf545416126098ed1c4da6cfd5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/a3340ebf545416126098ed1c4da6cfd5.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. 
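To make the earlier advice concrete about iterating over prompts rather than batching them on the mps backend, here is a minimal sketch; the second prompt is only an illustration and is not part of the original example.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps")
pipe.enable_attention_slicing()

prompts = [
    "a photo of an astronaut riding a horse on mars",
    "a watercolor painting of a lighthouse",
]

# Iterate instead of passing the whole list to the pipeline in one batch.
images = [pipe(prompt).images[0] for prompt in prompts]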
diff --git a/scrapped_outputs/a3ac502c8ff0a54c8a3ee4bf9427deb9.txt b/scrapped_outputs/a3ac502c8ff0a54c8a3ee4bf9427deb9.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/a3ac502c8ff0a54c8a3ee4bf9427deb9.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 🧨 diff --git a/scrapped_outputs/a3c7a18145991f4b4276b83f46b44d74.txt b/scrapped_outputs/a3c7a18145991f4b4276b83f46b44d74.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/a3c7a18145991f4b4276b83f46b44d74.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. 
Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/a3e9ec98e3095cc33868a0b0b8475273.txt b/scrapped_outputs/a3e9ec98e3095cc33868a0b0b8475273.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a417d4e1ee8169744d5b82687f89c8d8.txt b/scrapped_outputs/a417d4e1ee8169744d5b82687f89c8d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/a417d4e1ee8169744d5b82687f89c8d8.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. 
In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/a41f4ed5db7131bb7b36f868ea3c67fc.txt b/scrapped_outputs/a41f4ed5db7131bb7b36f868ea3c67fc.txt new file mode 100644 index 0000000000000000000000000000000000000000..8dc3604b4b9c771e172750704b5ebd2c5de8bc3e --- /dev/null +++ b/scrapped_outputs/a41f4ed5db7131bb7b36f868ea3c67fc.txt @@ -0,0 +1,122 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. 
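The callback and callback_steps arguments documented above are only described in prose, so here is a minimal sketch of wiring a step callback into the upscale pipeline together with attention slicing; the log_progress helper and the commented-out call are illustrative additions, not part of the original documentation:

import torch
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
pipeline.enable_attention_slicing()  # compute attention in slices to save memory

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # invoked every `callback_steps` denoising steps with the current latents
    print(f"step {step}, timestep {timestep}, latents shape {tuple(latents.shape)}")

# with `low_res_img` prepared as in the example above:
# upscaled = pipeline("a white cat", image=low_res_img, callback=log_progress, callback_steps=5).images[0]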
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
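As a closing illustration of the output class documented above, here is a small sketch of how it is typically consumed; it reuses the pipeline and low_res_img objects from the earlier example, and the variable names are illustrative only:

# return_dict=True (the default) gives a StableDiffusionPipelineOutput
result = pipeline(prompt="a white cat", image=low_res_img)
first_image = result.images[0]           # list of PIL images
flagged = result.nsfw_content_detected   # list of bools, or None if no safety check ran

# return_dict=False returns the same data as a plain tuple instead
images, nsfw_flags = pipeline(prompt="a white cat", image=low_res_img, return_dict=False)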
diff --git a/scrapped_outputs/a44db5fd8a2008d4d441545577725db8.txt b/scrapped_outputs/a44db5fd8a2008d4d441545577725db8.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/a44db5fd8a2008d4d441545577725db8.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. 
flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. 
Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/a469d3a4c477b72596738394f4918876.txt b/scrapped_outputs/a469d3a4c477b72596738394f4918876.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/a469d3a4c477b72596738394f4918876.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/a472d7c969cc8a023cfa2eaacaacc404.txt b/scrapped_outputs/a472d7c969cc8a023cfa2eaacaacc404.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/a472d7c969cc8a023cfa2eaacaacc404.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. 
They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. +>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/a47ce4591b2324677a2aa8cdc51c3958.txt b/scrapped_outputs/a47ce4591b2324677a2aa8cdc51c3958.txt new file mode 100644 index 0000000000000000000000000000000000000000..67b5d55033891dd6a812b152f5290d66bb31ebfe --- /dev/null +++ b/scrapped_outputs/a47ce4591b2324677a2aa8cdc51c3958.txt @@ -0,0 +1,239 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. 
This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview + +Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://huggingface.co/papers/2306.00637)). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. + Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. 
This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference + +You can make use of `torch.compile` function and gain a speed-up of about 2-3x: + + Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. 
text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusions import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... 
) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. 
guidance_scale (float, optional, defaults to 8.0) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. decoder_guidance_scale is defined as w of equation 2 of the Imagen
paper. Guidance scale is enabled by setting decoder_guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely
linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument. num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s)
to make generation deterministic. latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np"
(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch
>>> from diffusers import WuerstchenPriorPipeline

>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
...     "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) —
Prior image embeddings for the text prompt Output class for WuerstchenPriorPipeline.
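To make the hand-off between the two stages concrete, here is a short sketch that assumes the prior_pipeline and decoder_pipeline objects from the “Text-to-Image Generation” section above are still loaded; the variable names and the printed shape check are illustrative only:

prompt = "an image of a shiba inu, donning a spacesuit and helmet"

# the prior returns a WuerstchenPriorPipelineOutput holding compact Stage C latents
prior_output = prior_pipeline(prompt=prompt, guidance_scale=4.0)
print(prior_output.image_embeddings.shape)  # embeddings, not pixels

# the decoder consumes those embeddings (plus the prompt) and produces the final images
images = decoder_pipeline(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    guidance_scale=0.0,
    output_type="pil",
).images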
WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(2410.67)=256 and +width=int(2410.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). 
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) —
A function that is called at the end of each denoising step during inference. The function is called
with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by
callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) —
The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list
will be passed as the callback_kwargs argument. You will only be able to include variables listed in the
._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch
>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline

>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained(
...     "warp-ai/wuerstchen-prior", torch_dtype=torch.float16
... ).to("cuda")
>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to(
...     "cuda"
... )

>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet"
>>> prior_output = prior_pipe(prompt)
>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt) Citation Copied @misc{pernias2023wuerstchen,
 title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models},
 author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville},
 year={2023},
 eprint={2306.00637},
 archivePrefix={arXiv},
 primaryClass={cs.CV}
 } diff --git a/scrapped_outputs/a496ef25ccca4da37ca4bec0bc436067.txt b/scrapped_outputs/a496ef25ccca4da37ca4bec0bc436067.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a4a39b51da014ee507f81b59c90521ea.txt b/scrapped_outputs/a4a39b51da014ee507f81b59c90521ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/a4a39b51da014ee507f81b59c90521ea.txt @@ -0,0 +1,2 @@ 🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real-world applications and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyright issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups.
We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns.
Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/a4d7d0bfee0132f556b38590e452a176.txt b/scrapped_outputs/a4d7d0bfee0132f556b38590e452a176.txt new file mode 100644 index 0000000000000000000000000000000000000000..8d7d87b86ae5af7a7fd008d5f2ca5264b88e97d8 --- /dev/null +++ b/scrapped_outputs/a4d7d0bfee0132f556b38590e452a176.txt @@ -0,0 +1,110 @@ +Text-to-image The text-to-image script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. 
Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. Training a model can be taxing on your hardware, but if you enable gradient_checkpointing and mixed_precision, it is possible to train a model on a single 24GB GPU. If you’re training with larger batch sizes or want to train faster, it’s better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. This guide will explore the train_text_to_image.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: + + +```bash +cd examples/text_to_image +pip install -r requirements.txt +``` + + +```bash +cd examples/text_to_image +pip install -r requirements_flax.txt +``` + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
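If you plan to train on your own data, it is worth sanity-checking the dataset before touching any parameters. The snippet below is a minimal sketch, assuming a hypothetical local folder named my_dataset in the 🤗 Datasets imagefolder layout with a metadata.jsonl that supplies a text field; the path and column names are placeholders, not part of the script itself:

```python
from datasets import load_dataset

# Hypothetical local dataset in the 🤗 Datasets "imagefolder" layout:
#   my_dataset/train/0001.png, my_dataset/train/metadata.jsonl ({"file_name": ..., "text": ...}), ...
dataset = load_dataset("imagefolder", data_dir="my_dataset", split="train")

# The training script expects one image column and one caption column,
# selected later with --image_column and --caption_column.
print(dataset.column_names)   # e.g. ["image", "text"]
print(dataset[0]["text"])     # a caption string
print(dataset[0]["image"])    # a PIL.Image.Image
```

With the data in place, everything else is controlled through the script parameters.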
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image.py \ + --mixed_precision="fp16" Some basic and important parameters include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --image_column: the name of the image column in the dataset to train on --caption_column: the name of the text column in the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image.py \ + --snr_gamma=5.0 You can compare the loss surfaces for different snr_gamma values in this Weights and Biases report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. Training script The dataset preprocessing code and training loop are found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_text_to_image script starts by loading a scheduler and tokenizer. You can choose to use a different scheduler here if you want: Copied noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +tokenizer = CLIPTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision +) Then the script loads the UNet model: Copied load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet") +model.register_to_config(**load_model.config) + +model.load_state_dict(load_model.state_dict()) Next, the text and image columns of the dataset need to be preprocessed. The tokenize_captions function handles tokenizing the inputs, and the train_transforms function specifies the type of transforms to apply to the image. Both of these functions are bundled into preprocess_train: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["input_ids"] = tokenize_captions(examples) + return examples Lastly, the training loop handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 
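Before you do, it can help to see what the training loop described above boils down to. The following is a heavily simplified, single-step sketch (not the script itself): it omits EMA, gradient accumulation, mixed precision and the optimizer, and the toy batch stands in for the output of preprocess_train:

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")

# Toy batch: a random stand-in for one normalized 512x512 RGB image and its tokenized caption.
pixel_values = torch.randn(1, 3, 512, 512)
input_ids = tokenizer(
    ["a drawing of a green pokemon"],
    padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt",
).input_ids

# Encode the image into latent space and add noise at a random timestep.
latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],)).long()
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

# Condition on the text embeddings and predict the added noise.
encoder_hidden_states = text_encoder(input_ids)[0]
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

loss = F.mse_loss(model_pred.float(), noise.float())  # epsilon-prediction objective
loss.backward()
```

The real script wraps this in 🤗 Accelerate, adds the optimizer and checkpointing logic, and optionally applies the Min-SNR weighting discussed earlier.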
🚀 + + +Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --enable_xformers_memory_efficient_attention \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub + + +Training with Flax can be faster on TPUs and GPUs thanks to @duongna211. Flax is more efficient on a TPU, but GPU performance is also great. Set the environment variables MODEL_NAME and dataset_name to the model and the dataset (either from the Hub or a local path). To train on a local dataset, set the TRAIN_DIR and OUTPUT_DIR environment variables to the path of the dataset and where to save the model. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" \ + --push_to_hub + + +Once training is complete, you can use your newly trained model for inference: + + + + Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt="yoda").images[0] +image.save("yoda-pokemon.png") + + + + Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path/to/saved_model", dtype=jax.numpy.bfloat16) + +prompt = "yoda pokemon" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +images[0].save("yoda-pokemon.png") + + + Next steps Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful: Learn how to load LoRA weights for inference if you trained your model with LoRA. 
Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the Text-to-image task guide. diff --git a/scrapped_outputs/a4ded19986c5ca37f45213efa9552318.txt b/scrapped_outputs/a4ded19986c5ca37f45213efa9552318.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee5d5916fb70fd8ec6cb76f08d4c82e91bebd0c4 --- /dev/null +++ b/scrapped_outputs/a4ded19986c5ca37f45213efa9552318.txt @@ -0,0 +1,189 @@ +Linear multistep scheduler for discrete beta schedules + + +Overview + +Original implementation can be found here. + +LMSDiscreteScheduler + + +class diffusers.LMSDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by +Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +get_lms_coefficient + +< +source +> +( +order +t +current_order + +) + + +Parameters + +order (TODO) — + + +t (TODO) — + + +current_order (TODO) — + + + +Compute a linear multistep coefficient. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the K-LMS algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. 
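To see where set_timesteps() fits, here is a minimal hand-rolled denoising loop showing how it combines with scale_model_input() and the step() method documented below. This is only a sketch: the unconditional UNet2DModel checkpoint is a placeholder, and a real pipeline would also post-process the final sample into an image:

```python
import torch
from diffusers import LMSDiscreteScheduler, UNet2DModel

# Placeholder unconditional model; any denoising model with a matching sample size works.
unet = UNet2DModel.from_pretrained("google/ddpm-cat-256")
scheduler = LMSDiscreteScheduler(num_train_timesteps=1000, beta_schedule="linear")

scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size)
sample = sample * scheduler.init_noise_sigma  # start from noise at the scheduler's initial sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # (sigma**2 + 1) ** 0.5 scaling
    with torch.no_grad():
        noise_pred = unet(model_input, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```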
+ +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +order: int = 4 +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +order — coefficient for multi-step inference. + + +return_dict (bool) — option for returning tuple rather than LMSDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/a535e4b2f5d27104de1c602ff17b9956.txt b/scrapped_outputs/a535e4b2f5d27104de1c602ff17b9956.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/a535e4b2f5d27104de1c602ff17b9956.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/a54df189f1e2099429a47a43f1f90a5f.txt b/scrapped_outputs/a54df189f1e2099429a47a43f1f90a5f.txt new file mode 100644 index 0000000000000000000000000000000000000000..26903c98059769d09923319afe7503b246c3bfc7 --- /dev/null +++ b/scrapped_outputs/a54df189f1e2099429a47a43f1f90a5f.txt @@ -0,0 +1,92 @@ +Score SDE VE + + +Overview + +Score-Based Generative Modeling through Stochastic Differential Equations (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. +The abstract of the paper is the following: +Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. 
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. +The original codebase can be found here. +This pipeline implements the Variance Expanding (VE) variant of the method. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_score_sde_ve.py +Unconditional Image Generation +- + +ScoreSdeVePipeline + + +class diffusers.ScoreSdeVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: DiffusionPipeline + +) + + +Parameters + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the — + + +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) — +unet (UNet2DModel): U-Net architecture to denoise the encoded image. scheduler (SchedulerMixin): +The ScoreSdeVeScheduler scheduler to be used in combination with unet to denoise the encoded image. + + + + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 2000 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/a559360146fc29023dac333e6c0ddd90.txt b/scrapped_outputs/a559360146fc29023dac333e6c0ddd90.txt new file mode 100644 index 0000000000000000000000000000000000000000..987c9209fcde600484b42a955615d555013bf385 --- /dev/null +++ b/scrapped_outputs/a559360146fc29023dac333e6c0ddd90.txt @@ -0,0 +1,367 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. 
The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. 
text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. 
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
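As a rough illustration of the FreeU API, enabling and disabling it on a loaded pipeline looks like the sketch below. The scaling factors are only example values reported for Stable Diffusion v1-style models; consult the official repository for values suited to your checkpoint:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Example FreeU factors for SD v1-style models; tune per checkpoint.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
# ... generate as usual ...
pipe.disable_freeu()  # restore the default UNet skip/backbone behavior
```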
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
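As a sketch of how encode_prompt() and the output class above fit together, the snippet below precomputes the positive and negative embeddings once and reuses them in a call; the prompt, negative prompt, and image URL are illustrative only:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

# Precompute prompt embeddings once, then reuse them across calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="A fantasy landscape, trending on artstation",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=init_image,
    strength=0.75,
)
print(output.nsfw_content_detected)  # list of bools (or None if the check was skipped)
output.images[0].save("fantasy_landscape.png")
```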
FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
diff --git a/scrapped_outputs/a55cb85a666a28472c0cb6b954877b70.txt b/scrapped_outputs/a55cb85a666a28472c0cb6b954877b70.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/a55cb85a666a28472c0cb6b954877b70.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. 
The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/a57ddf54a1071623e2f317983ac6bcfa.txt b/scrapped_outputs/a57ddf54a1071623e2f317983ac6bcfa.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/a57ddf54a1071623e2f317983ac6bcfa.txt @@ -0,0 +1,65 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. 
original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
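To make the scheduler-level API above easier to picture, here is a minimal end-to-end sketch of few-step sampling with LCMScheduler; the checkpoint name, prompt, and hyperparameters are illustrative assumptions rather than part of this reference: Copied
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
# make sure the pipeline uses the multistep LCM scheduler documented here
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# LCMScheduler targets very few denoising steps (roughly 1-8)
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=4, guidance_scale=8.0).images[0]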
set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/a5ec988b5b08ba734d914389ca849385.txt b/scrapped_outputs/a5ec988b5b08ba734d914389ca849385.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b663b381edb40c44b5dc45124142bca44fb798 --- /dev/null +++ b/scrapped_outputs/a5ec988b5b08ba734d914389ca849385.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. 
However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). 
Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + 
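# Compile both sub-networks: in a ControlNet pipeline the ControlNet and the UNet each
# run at every denoising step, so compiling both contributes to the overall speed-up.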
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/a6378352565fe7fbfd581ac905c413a0.txt b/scrapped_outputs/a6378352565fe7fbfd581ac905c413a0.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a6b37c14a57be7db64f0e80116b7c2521234987 --- /dev/null +++ b/scrapped_outputs/a6378352565fe7fbfd581ac905c413a0.txt @@ -0,0 +1,1993 @@ +Models + +Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models. +The primary function of these models is to denoise an input sample, by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$. +The models are built on the base class ModelMixin, which is a torch.nn.Module with basic functionality for saving and loading models both locally and from the HuggingFace hub. + +ModelMixin + + +class diffusers.ModelMixin + +< +source +> +( +) + + + +Base class for all models. +ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading +and saving models. +config_name (str) — A filename under which the model should be stored when calling +save_pretrained(). + +disable_gradient_checkpointing + +< +source +> +( +) + + + +Deactivates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_gradient_checkpointing + +< +source +> +( +) + + + +Activates gradient checkpointing for the current model.
+Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. 
+ + +from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + + +Instantiate a pretrained pytorch model from a pre-trained model configuration. +The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train +the model, you should first set it back in training mode with model.train(). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +num_parameters + +< +source +> +( +only_trainable: bool = False +exclude_embeddings: bool = False + +) +→ +int + +Parameters + +only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters + + +exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embeddings parameters + + +Returns + +int + + + +The number of parameters. + + +Get number of (optionally, trainable or non-embeddings) parameters in the module. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. 
+ + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.12.0/en/api/models#diffusers.ModelMixin.from_pretrained) class method. + +UNet2DOutput + + +class diffusers.models.unet_2d.UNet2DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states output. Output of last layer of model. + + + + +UNet2DModel + + +class diffusers.UNet2DModel + +< +source +> +( +sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None +in_channels: int = 3 +out_channels: int = 3 +center_input_sample: bool = False +time_embedding_type: str = 'positional' +freq_shift: int = 0 +flip_sin_to_cos: bool = True +down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') +block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) +layers_per_block: int = 2 +mid_block_scale_factor: float = 1 +downsample_padding: int = 1 +act_fn: str = 'silu' +attention_head_dim: int = 8 +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +resnet_time_scale_shift: str = 'default' +add_attention: bool = True + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. + + +freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:True): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")): Tuple of downsample block +types. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(224, 448, 672, 896)): Tuple of block output channels. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. + + +downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +attention_head_dim (int, optional, defaults to 8) — The attention head dimension. 
+ + +norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + + +UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet2DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. + + +Returns + +UNet2DOutput or tuple + + + +UNet2DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet1DOutput + + +class diffusers.models.unet_1d.UNet1DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +Hidden states output. Output of last layer of model. + + + + +UNet1DModel + + +class diffusers.UNet1DModel + +< +source +> +( +sample_size: int = 65536 +sample_rate: typing.Optional[int] = None +in_channels: int = 2 +out_channels: int = 2 +extra_in_channels: int = 0 +time_embedding_type: str = 'fourier' +flip_sin_to_cos: bool = True +use_timestep_embedding: bool = False +freq_shift: float = 0.0 +down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') +mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' +out_block_type: str = None +block_out_channels: typing.Tuple[int] = (32, 32, 64) +act_fn: str = None +norm_num_groups: int = 8 +layers_per_block: int = 1 +downsample_each_block: bool = False + +) + + +Parameters + +sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. + + +in_channels (int, optional, defaults to 2) — Number of channels in the input sample. + + +out_channels (int, optional, defaults to 2) — Number of channels in the output. + + +time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. + + +freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:False): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(32, 32, 64)): Tuple of block output channels. + + +mid_block_type (str, optional, defaults to “UNetMidBlock1D”) — block type for middle of UNet. + + +out_block_type (str, optional, defaults to None) — optional output processing of UNet. 
+ + +act_fn (str, optional, defaults to None) — optional activitation function in UNet blocks. + + +norm_num_groups (int, optional, defaults to 8) — group norm member count in UNet blocks. + + +layers_per_block (int, optional, defaults to 1) — added number of layers in a UNet block. + + +downsample_each_block (int, optional, defaults to False — +experimental feature for using a UNet without upsampling. + + + +UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet1DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch_size, sample_size, num_channels) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. + + +Returns + +UNet1DOutput or tuple + + + +UNet1DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet2DConditionOutput + + +class diffusers.models.unet_2d_condition.UNet2DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet2DConditionModel + + +class diffusers.UNet2DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +center_input_sample: bool = False +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +mid_block_type: str = 'UNetMidBlock2DCrossAttn' +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +dual_cross_attention: bool = False +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. 
+ + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +The mid block type. Choose from UNetMidBlock2DCrossAttn or UNetMidBlock2DSimpleCrossAttn. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — The type of class embedding to use which is ultimately +summed with the time embeddings. Choose from None, "timestep", or "identity". + + + +UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +return_dict: bool = True + +) +→ +UNet2DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +UNet2DConditionOutput or tuple + + + +UNet2DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor, typing.Dict[str, typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor]]] + +) + + +Parameters + +`processor (dict of AttnProcessor or AttnProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all CrossAttention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainablae attention processors. — + + + + +DecoderOutput + + +class diffusers.models.vae.DecoderOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + + +Output of decoding method. + +VQEncoderOutput + + +class diffusers.models.vq_model.VQEncoderOutput + +< +source +> +( +latents: FloatTensor + +) + + +Parameters + +latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Encoded output sample of the model. Output of the last layer of the model. + + + +Output of VQModel encoding method. + +VQModel + + +class diffusers.VQModel + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 3 +sample_size: int = 32 +num_vq_embeddings: int = 256 +norm_num_groups: int = 32 +vq_embed_dim: typing.Optional[int] = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. 
+ + +vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. + + + +VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray +Kavukcuoglu. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +return_dict: bool = True + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + + +AutoencoderKLOutput + + +class diffusers.models.autoencoder_kl.AutoencoderKLOutput + +< +source +> +( +latent_dist: DiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +AutoencoderKL + + +class diffusers.AutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + + +Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma +and Max Welling. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +disable_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_slicing was previously invoked, this method will go back to computing +decoding in one step. + +enable_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +forward + +< +source +> +( +sample: FloatTensor +sample_posterior: bool = False +return_dict: bool = True +generator: typing.Optional[torch._C.Generator] = None + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
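To see how the pieces documented here fit together in practice, the following sketch encodes an image into the KL latent space and decodes it back; the checkpoint name, the "vae" subfolder, and the random stand-in image are assumptions for illustration: Copied
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae").to("cuda")

# stand-in for a preprocessed RGB image scaled to [-1, 1]
image = torch.randn(1, 3, 512, 512, device="cuda")

with torch.no_grad():
    posterior = vae.encode(image).latent_dist    # AutoencoderKLOutput.latent_dist
    latents = posterior.sample()                 # draw a latent from the diagonal Gaussian
    reconstruction = vae.decode(latents).sample  # DecoderOutput.sample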
+ + + + +Transformer2DModel + + +class diffusers.Transformer2DModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +num_vector_embeds: typing.Optional[int] = None +patch_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +num_embeds_ada_norm: typing.Optional[int] = None +use_linear_projection: bool = False +only_cross_attention: bool = False +upcast_attention: bool = False +norm_type: str = 'layer_norm' +norm_elementwise_affine: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +num_vector_embeds (int, optional) — +Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. +Includes the class for the masked latent pixel. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +num_embeds_ada_norm ( int, optional) — Pass if at least one of the norm_layers is AdaLayerNorm. +The number of diffusion steps used during training. Note that this is fixed at training time as it is used +to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for +up to but not more than steps than num_embeds_ada_norm. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + + +Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual +embeddings) inputs. +When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard +transformer action. Finally, reshape to image. +When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional +embeddings applied, see ImagePositionalEmbeddings. Then apply standard transformer action. Finally, predict +classes of unnoised image. +Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised +image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +Transformer2DModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). 
— +When continous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +Transformer2DModelOutput or tuple + + + +Transformer2DModelOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_2d.Transformer2DModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +Hidden states conditioned on encoder_hidden_states input. If discrete, returns probability distributions +for the unnoised latent pixels. + + + + +PriorTransformer + + +class diffusers.PriorTransformer + +< +source +> +( +num_attention_heads: int = 32 +attention_head_dim: int = 64 +num_layers: int = 20 +embedding_dim: int = 768 +num_embeddings = 77 +additional_embeddings = 4 +dropout: float = 0.0 + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. + + +num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. + + +embedding_dim (int, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP +image embeddings and text embeddings are both the same dimension. + + +num_embeddings (int, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the +length of the prompt after it has been tokenized. + + +additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + + +The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the +transformer predicts the image embeddings through a denoising diffusion process. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) 
+For more details, see the original paper: https://arxiv.org/abs/2204.06125 + +forward + +< +source +> +( +hidden_states +timestep: typing.Union[torch.Tensor, float, int] +proj_embedding: FloatTensor +encoder_hidden_states: FloatTensor +attention_mask: typing.Optional[torch.BoolTensor] = None +return_dict: bool = True + +) +→ +PriorTransformerOutput or tuple + +Parameters + +hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +x_t, the currently predicted image embeddings. + + +timestep (torch.long) — +Current denoising step. + + +proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. + + +encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. + + +attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. + + +Returns + +PriorTransformerOutput or tuple + + + +PriorTransformerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +PriorTransformerOutput + + +class diffusers.models.prior_transformer.PriorTransformerOutput + +< +source +> +( +predicted_image_embedding: FloatTensor + +) + + +Parameters + +predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. + + + + +FlaxModelMixin + + +class diffusers.FlaxModelMixin + +< +source +> +( +) + + + +Base class for all flax models. +FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, +downloading and saving models. + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +dtype: dtype = +*model_args +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids are namespaced under a user or organization name, like +runwayml/stable-diffusion-v1-5. +A path to a directory containing model weights saved using save_pretrained(), +e.g., ./my_model_directory/. + + + +dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified all the computation will be performed with the given dtype. +Note that this only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and +~ModelMixin.to_bf16. + + +model_args (sequence of positional arguments, optional) — +All remaining positional arguments will be passed to the underlying model’s __init__ method. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. 
+ + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., +output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, **kwargs will be directly passed to the +underlying model’s __init__ method (we assume all relevant updates to the configuration have +already been done) +If a configuration is not provided, kwargs will be first passed to the configuration class +initialization function (from_config()). Each key of kwargs that corresponds to +a configuration attribute will be used to override said attribute with the supplied kwargs +value. Remaining keys that do not correspond to any configuration attribute will be passed to the +underlying model’s __init__ function. + + + + +Instantiate a pretrained flax model from a pre-trained model configuration. +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +is_main_process: bool = True + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. 
In this case, set is_main_process=True only on +the main process to avoid race conditions. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.12.0/en/api/models#diffusers.FlaxModelMixin.from_pretrained) class method + +to_bf16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. +This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) + +to_fp16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. +This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you want don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) + +to_fp32 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_f16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) + +FlaxUNet2DConditionOutput + + +class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxUNet2DConditionModel + + +class diffusers.FlaxUNet2DConditionModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — +The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. 
The corresponding class names will be: “FlaxUpBlock2D”, +“FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + + +FlaxUNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a +timestep and returns sample shaped output. +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxDecoderOutput + + +class diffusers.models.vae_flax.FlaxDecoderOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +Parameters dtype + + + +Output of decoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKLOutput + + +class diffusers.models.vae_flax.FlaxAutoencoderKLOutput + +< +source +> +( +latent_dist: FlaxDiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. 
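Because the Flax classes above are plain Linen modules, the usual JAX workflow applies: load the parameters with from_pretrained() and JIT-compile the forward pass. The sketch below is illustrative rather than canonical — the checkpoint, the subfolder="unet" and from_pt=True arguments, and the dummy input shapes (a 64×64 latent and CLIP text states of width 768, as in Stable Diffusion v1-5) are assumptions.

>>> import jax
>>> import jax.numpy as jnp
>>> from diffusers import FlaxUNet2DConditionModel

>>> # Load UNet weights; from_pt=True converts a PyTorch checkpoint (drop it if Flax weights exist)
>>> unet, params = FlaxUNet2DConditionModel.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="unet", from_pt=True
... )

>>> # Wrap the forward pass so jax.jit can trace and compile it once
>>> def unet_forward(params, sample, timesteps, encoder_hidden_states):
...     return unet.apply(
...         {"params": params}, sample, timesteps, encoder_hidden_states=encoder_hidden_states
...     ).sample

>>> jitted_forward = jax.jit(unet_forward)

>>> # Dummy inputs with Stable Diffusion-like shapes (illustrative only)
>>> sample = jnp.zeros((1, 4, 64, 64), dtype=jnp.float32)
>>> timesteps = jnp.array([10], dtype=jnp.int32)
>>> encoder_hidden_states = jnp.zeros((1, 77, 768), dtype=jnp.float32)

>>> noise_pred = jitted_forward(params, sample, timesteps, encoder_hidden_states)  # (1, 4, 64, 64)

The first call triggers compilation; subsequent calls with the same shapes reuse the compiled function, which is where the JIT support listed above becomes useful.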
+ +FlaxAutoencoderKL + + +class diffusers.FlaxAutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +dtype: dtype = +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — +Input channels + + +out_channels (int, optional, defaults to 3) — +Output channels + + +down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +DownEncoder block type + + +up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +UpDecoder block type + + +block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple containing the number of output channels for each block + + +layers_per_block (int, optional, defaults to 2) — +Number of Resnet layer for each block + + +act_fn (str, optional, defaults to silu) — +Activation function + + +latent_channels (int, optional, defaults to 4) — +Latent space channels + + +norm_num_groups (int, optional, defaults to 32) — +Norm num group + + +sample_size (int, optional, defaults to 32) — +Sample input size + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +parameters dtype + + + +Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational +Bayes by Diederik P. Kingma and Max Welling. +This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization diff --git a/scrapped_outputs/a640b62722743b8d87609c11db7ff84b.txt b/scrapped_outputs/a640b62722743b8d87609c11db7ff84b.txt new file mode 100644 index 0000000000000000000000000000000000000000..0210cbd544b0e21f7b6b7210ce551caa401e72ea --- /dev/null +++ b/scrapped_outputs/a640b62722743b8d87609c11db7ff84b.txt @@ -0,0 +1,1075 @@ +VersatileDiffusion + +VersatileDiffusion was proposed in Versatile Diffusion: Text, Images and Variations All in One Diffusion Model by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi . +The abstract of the paper is the following: +The recent advances in diffusion models have set an impressive milestone in many generation tasks. Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. 
Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD initiates novel extensions and applications such as disentanglement of style and semantic, image-text dual-guided generation, etc.; c) Through these experiments and applications, VD provides more semantic insights of the generated outputs. + +Tips + +VersatileDiffusion is conceptually very similar as Stable Diffusion, but instead of providing just a image data stream conditioned on text, VersatileDiffusion provides both a image and text data stream and can be conditioned on both text and image. + +*Run VersatileDiffusion* + +You can both load the memory intensive “all-in-one” VersatileDiffusionPipeline that can run all tasks +with the same class as shown in VersatileDiffusionPipeline.text_to_image(), VersatileDiffusionPipeline.image_variation(), and VersatileDiffusionPipeline.dual_guided() +or +You can run the individual pipelines which are much more memory efficient: +Text-to-Image: VersatileDiffusionTextToImagePipeline.call() +Image Variation: VersatileDiffusionImageVariationPipeline.call() +Dual Text and Image Guided Generation: VersatileDiffusionDualGuidedPipeline.call() + +*How to load and use different schedulers.* + +The versatile diffusion pipelines uses DDIMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the alt diffusion pipeline such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("shi-labs/versatile-diffusion", subfolder="scheduler") +>>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", scheduler=euler_scheduler) + +VersatileDiffusionPipeline + + +class diffusers.VersatileDiffusionPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPImageProcessor +text_encoder: CLIPTextModel +image_encoder: CLIPVisionModel +image_unet: UNet2DConditionModel +text_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionMegaSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. 
+Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +dual_guided + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
+ + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe.dual_guided( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") + +image_variation + +< +source +> +( +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.image_variation(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +text_to_image + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. 
If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionPipeline +>>> import torch + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +VersatileDiffusionTextToImagePipeline + + +class diffusers.VersatileDiffusionTextToImagePipeline + +< +source +> +( +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
+ + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionTextToImagePipeline +>>> import torch + +>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
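For reference, the offloading hook described above slots into the text-to-image example by replacing the pipe.to("cuda") call. This is a minimal sketch; it assumes accelerate is installed and a single GPU is visible.

>>> import torch
>>> from diffusers import VersatileDiffusionTextToImagePipeline

>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained(
...     "shi-labs/versatile-diffusion", torch_dtype=torch.float16
... )
>>> pipe.remove_unused_weights()

>>> # Offload submodules to CPU; each one is moved to the GPU only while its forward pass runs
>>> pipe.enable_sequential_cpu_offload()

>>> image = pipe("an astronaut riding on a horse on mars").images[0]
>>> image.save("./astronaut.png")

Expect slower generation than keeping the whole pipeline on the GPU; the trade-off is a much smaller peak GPU memory footprint.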
+ +VersatileDiffusionImageVariationPipeline + + +class diffusers.VersatileDiffusionImageVariationPipeline + +< +source +> +( +image_feature_extractor: CLIPImageProcessor +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionImageVariationPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe(image, generator=generator).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +VersatileDiffusionDualGuidedPipeline + + +class diffusers.VersatileDiffusionDualGuidedPipeline + +< +source +> +( +tokenizer: CLIPTokenizer +image_feature_extractor: CLIPImageProcessor +text_encoder: CLIPTextModelWithProjection +image_encoder: CLIPVisionModelWithProjection +image_unet: UNet2DConditionModel +text_unet: UNetFlatConditionModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] +image: typing.Union[str, typing.List[str]] +text_to_image_strength: float = 0.5 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +**kwargs + +) +→ +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
+ + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.ImagePipelineOutput or tuple + + + +~pipelines.stable_diffusion.ImagePipelineOutput if return_dict is True, otherwise a `tuple. When +returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import VersatileDiffusionDualGuidedPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/a682a4825bb4801d7ebd7884aad95336.txt b/scrapped_outputs/a682a4825bb4801d7ebd7884aad95336.txt new file mode 100644 index 0000000000000000000000000000000000000000..e61eb0a68fe6473d1d312b7484e9469ca28f24df --- /dev/null +++ b/scrapped_outputs/a682a4825bb4801d7ebd7884aad95336.txt @@ -0,0 +1,75 @@ +Shap-E Shap-E is a conditional model for generating 3D assets which could be used for video game development, interior design, and architecture. It is trained on a large dataset of 3D assets, and post-processed to render more views of each object and produce 16K instead of 4K point clouds. The Shap-E model is trained in two steps: an encoder accepts the point clouds and rendered views of a 3D asset and outputs the parameters of implicit functions that represent the asset a diffusion model is trained on the latents produced by the encoder to generate either neural radiance fields (NeRFs) or a textured 3D mesh, making it easier to render and use the 3D asset in downstream applications This guide will show you how to use Shap-E to start generating your own 3D assets! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate trimesh Text-to-3D To generate a gif of a 3D object, pass a text prompt to the ShapEPipeline. The pipeline generates a list of image frames which are used to create the 3D object. 
Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = ["A firecracker", "A birthday cupcake"] + +images = pipe( + prompt, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images Now use the export_to_gif() function to turn the list of image frames into a gif of the 3D object. Copied from diffusers.utils import export_to_gif + +export_to_gif(images[0], "firecracker_3d.gif") +export_to_gif(images[1], "cake_3d.gif") prompt = "A firecracker" prompt = "A birthday cupcake" Image-to-3D To generate a 3D object from another image, use the ShapEImg2ImgPipeline. You can use an existing image or generate an entirely new one. Let’s use the Kandinsky 2.1 model to generate a new image. Copied from diffusers import DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +prompt = "A cheeseburger, white background" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() +image = pipeline( + prompt, + image_embeds=image_embeds, + negative_image_embeds=negative_image_embeds, +).images[0] + +image.save("burger.png") Pass the cheeseburger to the ShapEImg2ImgPipeline to generate a 3D representation of it. Copied from PIL import Image +from diffusers import ShapEImg2ImgPipeline +from diffusers.utils import export_to_gif + +pipe = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16, variant="fp16").to("cuda") + +guidance_scale = 3.0 +image = Image.open("burger.png").resize((256, 256)) + +images = pipe( + image, + guidance_scale=guidance_scale, + num_inference_steps=64, + frame_size=256, +).images + +gif_path = export_to_gif(images[0], "burger_3d.gif") cheeseburger 3D cheeseburger Generate mesh Shap-E is a flexible model that can also generate textured mesh outputs to be rendered for downstream applications. In this example, you’ll convert the output into a glb file because the 🤗 Datasets library supports mesh visualization of glb files which can be rendered by the Dataset viewer. You can generate mesh outputs for both the ShapEPipeline and ShapEImg2ImgPipeline by specifying the output_type parameter as "mesh": Copied import torch +from diffusers import ShapEPipeline + +device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to(device) + +guidance_scale = 15.0 +prompt = "A birthday cupcake" + +images = pipe(prompt, guidance_scale=guidance_scale, num_inference_steps=64, frame_size=256, output_type="mesh").images Use the export_to_ply() function to save the mesh output as a ply file: You can optionally save the mesh output as an obj file with the export_to_obj() function. The ability to save the mesh output in a variety of formats makes it more flexible for downstream usage! 
Copied from diffusers.utils import export_to_ply + +ply_path = export_to_ply(images[0], "3d_cake.ply") +print(f"Saved to folder: {ply_path}") Then you can convert the ply file to a glb file with the trimesh library: Copied import trimesh + +mesh = trimesh.load("3d_cake.ply") +mesh_export = mesh.export("3d_cake.glb", file_type="glb") By default, the mesh output is focused from the bottom viewpoint but you can change the default viewpoint by applying a rotation transform: Copied import trimesh +import numpy as np + +mesh = trimesh.load("3d_cake.ply") +rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) +mesh = mesh.apply_transform(rot) +mesh_export = mesh.export("3d_cake.glb", file_type="glb") Upload the mesh file to your dataset repository to visualize it with the Dataset viewer! diff --git a/scrapped_outputs/a69931156dd1ed5b4038fad6f2133f51.txt b/scrapped_outputs/a69931156dd1ed5b4038fad6f2133f51.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/a69931156dd1ed5b4038fad6f2133f51.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. 
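For example, a one-line sketch of switching back to the baseline behavior (this snippet is not part of the original guide; call enable_freeu() again before the FreeU run below): Copied pipeline.disable_freeu()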
And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/a6e45cb7456f1cafcb5acf01a651f249.txt b/scrapped_outputs/a6e45cb7456f1cafcb5acf01a651f249.txt new file mode 100644 index 0000000000000000000000000000000000000000..eaf1daaf7ae542a78f5381f7eae39049ee58f668 --- /dev/null +++ b/scrapped_outputs/a6e45cb7456f1cafcb5acf01a651f249.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. 
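For example, a source install can be done with pip (the exact command is assumed here rather than given in the guide): Copied pip install git+https://github.com/huggingface/diffusers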
StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/a6ef0004502437085a987d20a95abafa.txt b/scrapped_outputs/a6ef0004502437085a987d20a95abafa.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/a6ef0004502437085a987d20a95abafa.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. 
Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array representing the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. unsafe_images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker and may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled. Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function.
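Because StableDiffusionSafePipelineOutput exposes the applied safety concept and per-image flags, a minimal sketch for inspecting them (building on the pipeline and prompt from the example above; not part of the original docs) is: Copied out = pipeline(prompt=prompt, **SafetyConfig.STRONG) +image = out.images[0] +print(out.applied_safety_concept)  # safety concept used for guidance, or None if disabled +print(out.nsfw_content_detected)  # per-image nsfw flags, or None if no safety check ran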
diff --git a/scrapped_outputs/a7435e7cb938159c94b8678abb81dd25.txt b/scrapped_outputs/a7435e7cb938159c94b8678abb81dd25.txt new file mode 100644 index 0000000000000000000000000000000000000000..f09fe46dfcc2641727bea35773a653b7ffb9e5e5 --- /dev/null +++ b/scrapped_outputs/a7435e7cb938159c94b8678abb81dd25.txt @@ -0,0 +1,96 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. 
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/a7473347b9491d1cdf3ff17f04dd2922.txt b/scrapped_outputs/a7473347b9491d1cdf3ff17f04dd2922.txt new file mode 100644 index 0000000000000000000000000000000000000000..92296bcdbbca1fe039034b8fbfc23043d7895d17 --- /dev/null +++ b/scrapped_outputs/a7473347b9491d1cdf3ff17f04dd2922.txt @@ -0,0 +1,105 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. 
Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. 
This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. 
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. 
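As a usage sketch, the scheduler can be swapped into a pipeline through from_config (the checkpoint below is illustrative, and the swap pattern follows the general Diffusers convention rather than anything stated on this page): Copied import torch +from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config) +# as noted in the tips above, around 20 steps can already give high-quality samples +image = pipe("an astronaut riding a horse on mars", num_inference_steps=20).images[0]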
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/a74b46527fed533883342309202ac4b8.txt b/scrapped_outputs/a74b46527fed533883342309202ac4b8.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/a74b46527fed533883342309202ac4b8.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. 
Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. 
+ Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/a763cc08c7ab02945ce7f386e0f15ba5.txt b/scrapped_outputs/a763cc08c7ab02945ce7f386e0f15ba5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a790b743cc77d72aa8f0cc0cce78eedb.txt b/scrapped_outputs/a790b743cc77d72aa8f0cc0cce78eedb.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1045fc50045367e6cb6ef462fcda1222c68ba0d --- /dev/null +++ b/scrapped_outputs/a790b743cc77d72aa8f0cc0cce78eedb.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! 
If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Learn more about other ways PyTorch 2.0 can help optimize your model in the Accelerate inference of text-to-image diffusion models tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = 
"ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/a7a3e2f69d050db98b34827f57156e23.txt b/scrapped_outputs/a7a3e2f69d050db98b34827f57156e23.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/a7a3e2f69d050db98b34827f57156e23.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. 
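As a quick orientation, here is a minimal sketch of panorama generation with this pipeline. It reuses the checkpoint and scheduler from the pipeline example documented further down; the view_batch_size=2 value and the output filename are illustrative choices, and the Tips that follow explain these options in more detail:

import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

# Same checkpoint and scheduler as in the pipeline example below.
model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of the dolomites"
# width=2048 produces a panorama-like image; circular_padding avoids a visible
# seam between the right and left edges; view_batch_size > 1 can speed up
# generation on high-performance GPUs at the cost of more VRAM.
image = pipe(
    prompt,
    width=2048,
    circular_padding=True,
    view_batch_size=2,
).images[0]
image.save("panorama.png")

Setting circular_padding=True trades a small amount of extra computation for a seamless wrap-around between the right and left edges of the panorama.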
Tips While calling StableDiffusionPanoramaPipeline, it’s possible to set the view_batch_size parameter to a value greater than 1. +For some GPUs with high performance, this can speed up the generation process at the cost of higher VRAM usage. To generate panorama-like images, make sure you pass the width parameter accordingly. We recommend a width value of 2048, which is the default. Circular padding is applied to avoid stitching artifacts when working with panoramas, ensuring a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts match (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/a7bb8a85e42ad3f020ff8519558e5259.txt b/scrapped_outputs/a7bb8a85e42ad3f020ff8519558e5259.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a80385538bf9499b89f3e5194f563cb1.txt b/scrapped_outputs/a80385538bf9499b89f3e5194f563cb1.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/a80385538bf9499b89f3e5194f563cb1.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. 
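To make the denoising interface concrete before the class reference, here is a minimal, self-contained sketch: it instantiates a toy-sized PriorTransformer with random weights (the tiny dimensions are illustrative only, not a trained configuration) and runs a single forward step with dummy CLIP-like embeddings, following the forward() signature documented below:

import torch
from diffusers import PriorTransformer

# Toy-sized prior with random weights; dimensions are illustrative only.
prior = PriorTransformer(
    num_attention_heads=2,
    attention_head_dim=8,
    num_layers=2,
    embedding_dim=16,
    num_embeddings=7,
)

batch_size = 2
hidden_states = torch.randn(batch_size, 16)             # current noisy image-embedding estimate
proj_embedding = torch.randn(batch_size, 16)            # pooled (CLIP-like) text embedding to condition on
encoder_hidden_states = torch.randn(batch_size, 7, 16)  # per-token text hidden states

out = prior(
    hidden_states,
    timestep=10,
    proj_embedding=proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
)
print(out.predicted_image_embedding.shape)  # torch.Size([2, 16])

In practice the model is loaded from a pretrained checkpoint with from_pretrained() rather than built from scratch; the random-weight instantiation above only illustrates the input and output shapes.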
PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. 
attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/a8471e8318a12da00a305ce911702294.txt b/scrapped_outputs/a8471e8318a12da00a305ce911702294.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a866cc4dffd9d5a64f177f2cfa87746e.txt b/scrapped_outputs/a866cc4dffd9d5a64f177f2cfa87746e.txt new file mode 100644 index 0000000000000000000000000000000000000000..2eccd67f9c208127ff91723e453d53474eb833b7 --- /dev/null +++ b/scrapped_outputs/a866cc4dffd9d5a64f177f2cfa87746e.txt @@ -0,0 +1,38 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally. 
For this loading method, you need to set timestep_spacing="trailing" (feel free to experiment with the other scheduler config values to get better results): Copied from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", + torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable it, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is greater than or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. int(2 * 0.5) = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline_image2image = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline_image2image(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2.0 or higher. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this once before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcast to float32.
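As a rough sketch of that alternative (the checkpoint id madebyollin/sdxl-vae-fp16-fix is an assumption here, not taken from this page), you could load the community VAE and pass it to the pipeline: Copied import torch
+from diffusers import AutoPipelineForText2Image, AutoencoderKL
+
+# Load the fp16-friendly community VAE (assumed repository id) and hand it to SDXL Turbo
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+pipeline = AutoPipelineForText2Image.from_pretrained(
+    "stabilityai/sdxl-turbo", vae=vae, torch_dtype=torch.float16, variant="fp16"
+).to("cuda")
+
+image = pipeline(prompt="A cinematic shot of a baby racoon wearing an intricate italian priest robe.", guidance_scale=0.0, num_inference_steps=1).images[0]
With this VAE in place, there is no need to call pipe.upcast_vae().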
diff --git a/scrapped_outputs/a880f37696ab74d318f2a5fdfd5d25ba.txt b/scrapped_outputs/a880f37696ab74d318f2a5fdfd5d25ba.txt new file mode 100644 index 0000000000000000000000000000000000000000..321353dbaa0806b386dcef5fad6fbc29b72b6c56 --- /dev/null +++ b/scrapped_outputs/a880f37696ab74d318f2a5fdfd5d25ba.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. 
sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/a882db7198b3deea72198c0b4f5f1c1f.txt b/scrapped_outputs/a882db7198b3deea72198c0b4f5f1c1f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/a890f0526dfe338d333bf3d2a8e4bb2f.txt b/scrapped_outputs/a890f0526dfe338d333bf3d2a8e4bb2f.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/a890f0526dfe338d333bf3d2a8e4bb2f.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. 
Start by installing DeepCache: Copied pip install DeepCache Then load and enable the DeepCacheSDHelper: Copied import torch + from diffusers import StableDiffusionPipeline + pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") + ++ from DeepCache import DeepCacheSDHelper ++ helper = DeepCacheSDHelper(pipe=pipe) ++ helper.set_params( ++ cache_interval=3, ++ cache_branch_id=0, ++ ) ++ helper.enable() + + image = pipe("a photo of an astronaut on a moon").images[0] The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval means the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. +Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset. Benchmark We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Resolution Batch size Original DeepCache(I=3, B=0) DeepCache(I=5, B=0) DeepCache(I=5, B=1) 512 8 15.96 6.88(2.32x) 5.03(3.18x) 7.27(2.20x) 4 8.39 3.60(2.33x) 2.62(3.21x) 3.75(2.24x) 1 2.61 1.12(2.33x) 0.81(3.24x) 1.11(2.35x) 768 8 43.58 18.99(2.29x) 13.96(3.12x) 21.27(2.05x) 4 22.24 9.67(2.30x) 7.10(3.13x) 10.74(2.07x) 1 6.33 2.72(2.33x) 1.97(3.21x) 2.98(2.12x) 1024 8 101.95 45.57(2.24x) 33.72(3.02x) 53.00(1.92x) 4 49.25 21.86(2.25x) 16.19(3.04x) 25.78(1.91x) 1 13.83 6.07(2.28x) 4.43(3.12x) 7.15(1.93x) diff --git a/scrapped_outputs/a8e4c0ece1f8dc2bb9e98f0d21b13ff1.txt b/scrapped_outputs/a8e4c0ece1f8dc2bb9e98f0d21b13ff1.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/a8e4c0ece1f8dc2bb9e98f0d21b13ff1.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/a90b6c9aa3e26f99aaece4825911af64.txt b/scrapped_outputs/a90b6c9aa3e26f99aaece4825911af64.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/a90b6c9aa3e26f99aaece4825911af64.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. 
Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. ❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce an (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode.
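As a small illustration (reusing the pipe object from the example above), a manually set mode can be cleared so that the next call infers the task from its inputs again: Copied # After reset_mode(), passing a prompt makes the pipeline infer text-to-image generation
+pipe.reset_mode()
+sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
+image = sample.images[0]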
You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. 
Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by an image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by an image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with U-Net-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. 
This argument is assumed to hold a full set of VAE, CLIP, and text latents and, if supplied, it overrides the values of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds will be generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images.
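As a small, optional sketch of these memory-saving switches (assuming a pipe created with UniDiffuserPipeline.from_pretrained as in the usage examples above): Copied # Lower peak memory during VAE decoding at a small speed cost
+pipe.enable_vae_slicing()
+pipe.enable_vae_tiling()
+
+# ... run generation ...
+
+# Return to single-pass decoding
+pipe.disable_vae_slicing()
+pipe.disable_vae_tiling()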
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/a945d6fec1f4777d98f9c6026bfe118b.txt b/scrapped_outputs/a945d6fec1f4777d98f9c6026bfe118b.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/a945d6fec1f4777d98f9c6026bfe118b.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. 
Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/a9496914b0c13182a3f28b2d33f99e79.txt b/scrapped_outputs/a9496914b0c13182a3f28b2d33f99e79.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/a9496914b0c13182a3f28b2d33f99e79.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. 
steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/a95c001616a97ef415e82aa8ab5bb85d.txt b/scrapped_outputs/a95c001616a97ef415e82aa8ab5bb85d.txt new file mode 100644 index 0000000000000000000000000000000000000000..8cd225c40f1e1f3165282e7d014eb49dc1a5c8bf --- /dev/null +++ b/scrapped_outputs/a95c001616a97ef415e82aa8ab5bb85d.txt @@ -0,0 +1,42 @@ +Improve image quality with deterministic generation + +A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. 
+Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: + + + Copied +prompt = "Labrador in the style of Vermeer" +Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +Now, define four different Generator’s and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: + + + Copied +>>> import torch + +>>> generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] +Generate the images and have a look: + + + Copied +>>> images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +>>> images + +In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: + + + Copied +prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] +Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! + + + Copied +>>> images = pipe(prompt, generator=generator).images +>>> images diff --git a/scrapped_outputs/a95f2388a71edb871529e3fc7f75c009.txt b/scrapped_outputs/a95f2388a71edb871529e3fc7f75c009.txt new file mode 100644 index 0000000000000000000000000000000000000000..54b679f844e0756b73267dc59e36b49e7f006adb --- /dev/null +++ b/scrapped_outputs/a95f2388a71edb871529e3fc7f75c009.txt @@ -0,0 +1,95 @@ +PNDM + + +Overview + +Pseudo Numerical methods for Diffusion Models on manifolds (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao. +The abstract of the paper is the following: +Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. 
According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_pndm.py +Unconditional Image Generation +- + +PNDMPipeline + + +class diffusers.PNDMPipeline + +< +source +> +( +unet: UNet2DModel +scheduler: PNDMScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +The PNDMScheduler to be used in combination with unet to denoise the encoded image. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — The number of images to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +generator (torch.Generator, optional) — A torch +generator to make generation +deterministic. + + +output_type (str, optional, defaults to "pil") — The output format of the generate image. Choose +between PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — Whether or not to return a +ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/a9d12ce9705ea43d50fbf0f14c7dca20.txt b/scrapped_outputs/a9d12ce9705ea43d50fbf0f14c7dca20.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/a9d12ce9705ea43d50fbf0f14c7dca20.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. 
You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/a9ed380b3c877f12bca97a0fb8bf2c19.txt b/scrapped_outputs/a9ed380b3c877f12bca97a0fb8bf2c19.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/a9ed380b3c877f12bca97a0fb8bf2c19.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! 
Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. 
This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/a9f14a1e955f843b282d72036e577a0e.txt b/scrapped_outputs/a9f14a1e955f843b282d72036e577a0e.txt new file mode 100644 index 0000000000000000000000000000000000000000..2eccd67f9c208127ff91723e453d53474eb833b7 --- /dev/null +++ b/scrapped_outputs/a9f14a1e955f843b282d72036e577a0e.txt @@ -0,0 +1,38 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. 
Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally. For this loading method, you need to set timestep_spacing="trailing" (feel free to experiment with the other scheduler config values to get better results): Copied from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", + torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config, timestep_spacing="trailing") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable guidance, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger than or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. int(2 * 0.5) = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline_image2image = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline_image2image(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2.0 or higher. The first inference run will be very slow, but subsequent ones will be much faster.
Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/aa6a926162b9ba5a32d907472cc5d505.txt b/scrapped_outputs/aa6a926162b9ba5a32d907472cc5d505.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c696398635d3121e95a98f588be43126adc80ee --- /dev/null +++ b/scrapped_outputs/aa6a926162b9ba5a32d907472cc5d505.txt @@ -0,0 +1,323 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. 
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
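As an illustrative sketch (not part of the official reference) of how the enable_freeu() and fuse_qkv_projections() toggles documented above are typically combined. It assumes a CUDA device, and the FreeU factors shown are values commonly suggested for Stable Diffusion v1.x checkpoints and may need tuning for other models:
Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# s1/s2 attenuate skip-connection features, b1/b2 amplify backbone features.
# These values are commonly suggested for Stable Diffusion v1.x and may need tuning.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

# Experimental: fuse the QKV projections of the attention modules.
pipe.fuse_qkv_projections()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Revert both changes when they are no longer wanted.
pipe.unfuse_qkv_projections()
pipe.disable_freeu()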
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
diff --git a/scrapped_outputs/aab8cc23095f38a6a5ae4faabd7188bc.txt b/scrapped_outputs/aab8cc23095f38a6a5ae4faabd7188bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..676f59486d6e5f0cdefcafa5b34abe676b789494 --- /dev/null +++ b/scrapped_outputs/aab8cc23095f38a6a5ae4faabd7188bc.txt @@ -0,0 +1,347 @@ +UniPC + + +Overview + +UniPC is a training-free framework designed for the fast sampling of diffusion models, which consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +For more details about the method, please refer to the [paper] and the [code]. +Fast Sampling of Diffusion Models with Exponential Integrator. + +UniPCMultistepScheduler + + +class diffusers.UniPCMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +predict_x0: bool = True +solver_type: str = 'bh2' +lower_order_final: bool = True +disable_corrector: typing.List[int] = [] +solver_p: SchedulerMixin = None + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of UniPC, also the p in UniPC-p; can be any positive integer. Note that the effective order of +accuracy is solver_order + 1 due to the UniC. We recommend to use solver_order=2 for guided sampling, +and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +For pixel-space diffusion models, you can set both predict_x0=True and thresholding=True to use the +dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models +(such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. + + +predict_x0 (bool, default True) — +whether to use the updating algrithm on the predicted x0. See https://arxiv.org/abs/2211.01095 for details + + +solver_type (str, default bh2) — +the solver type of UniPC. We recommend use bh1 for unconditional sampling when steps < 10, and use bh2 +otherwise. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. 
Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10. + + +disable_corrector (list, default []) — +decides at which steps to disable the corrector. For large guidance scale, the misalignment between the +epsilon_theta(x_t, c) and epsilon_theta(x_t^c, c) might influence the convergence. This can be mitigated +by disabling the corrector at the first few steps (e.g., disable_corrector=[0]) + + +solver_p (SchedulerMixin, default None) — +can be any other scheduler. If specified, the algorithm will become solver_p + UniC. + + + +UniPC is a training-free framework designed for the fast sampling of diffusion models, which consists of a +corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. UniPC is +by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can +also be applied to both noise prediction and data prediction models. The corrector UniC can also be applied +after any off-the-shelf solvers to increase the order of accuracy. +For more details, see the original paper: https://arxiv.org/abs/2302.04867 +Currently, we support the multistep UniPC for both noise prediction models and data prediction models. We recommend +using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use the dynamic thresholding. Note +that the thresholding method is unsuitable for latent-space diffusion models (such as stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm PC needs. + +multistep_uni_c_bh_update + +< +source +> +( +this_model_output: FloatTensor +this_timestep: int +last_sample: FloatTensor +this_sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +this_model_output (torch.FloatTensor) — the model outputs at x_t + + +this_timestep (int) — the current timestep t + + +last_sample (torch.FloatTensor) — the generated sample before the last predictor: x_{t-1} + + +this_sample (torch.FloatTensor) — the generated sample after the last predictor: x_{t} + + +order (int) — the p of UniC-p at this step. Note that the effective order of accuracy +should be order + 1 + + +Returns + +torch.FloatTensor + + + +the corrected sample tensor at the current timestep. + + +One step for the UniC (B(h) version).
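Before the remaining step methods, a minimal usage sketch may help orient readers: in practice the scheduler is usually swapped into an existing pipeline with from_config() rather than driven through these low-level methods directly. The example below assumes a standard Stable Diffusion checkpoint and a CUDA device:
Copied
import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config; solver_order=2 is the recommended setting for guided sampling.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# UniPC typically needs comparatively few inference steps.
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]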
+ +multistep_uni_p_bh_update + +< +source +> +( +model_output: FloatTensor +prev_timestep: int +sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — +direct outputs from learned diffusion model at the current timestep. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +order (int) — the order of UniP at this step, also the p in UniPC-p. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep UniPC. diff --git a/scrapped_outputs/aac8569e92daf3af435e5ee648b9667e.txt b/scrapped_outputs/aac8569e92daf3af435e5ee648b9667e.txt new file mode 100644 index 0000000000000000000000000000000000000000..62048f3edadd8546292e68118bb5c184fe8f5dda --- /dev/null +++ b/scrapped_outputs/aac8569e92daf3af435e5ee648b9667e.txt @@ -0,0 +1,341 @@ +Text-to-Image Generation + + +StableDiffusionPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionPipeline is capable of generating photo-realistic images given any text input using Stable Diffusion. 
+The original codebase can be found here: +Stable Diffusion V1: CompVis/stable-diffusion +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stable-diffusion-v1-4 (512x512 resolution) CompVis/stable-diffusion-v1-4 +stable-diffusion-v1-5 (512x512 resolution) runwayml/stable-diffusion-v1-5 +stable-diffusion-2-base (512x512 resolution): stabilityai/stable-diffusion-2-base +stable-diffusion-2 (768x768 resolution): stabilityai/stable-diffusion-2 +stable-diffusion-2-1-base (512x512 resolution) stabilityai/stable-diffusion-2-1-base +stable-diffusion-2-1 (768x768 resolution): stabilityai/stable-diffusion-2-1 + +class diffusers.StableDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image.
+ + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation.
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
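A brief, illustrative sketch (not an official example) combining the memory-saving helpers described above; it assumes the accelerate library is installed and a CUDA device is available:
Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Submodules are moved to the GPU only for their forward pass, trading speed for memory.
pipe.enable_sequential_cpu_offload()

# Optional extra savings documented above.
pipe.enable_attention_slicing()
pipe.enable_vae_slicing()

image = pipe("a photo of an astronaut riding a horse on mars").images[0]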
diff --git a/scrapped_outputs/aad6f442169ecc29203d649753bfc040.txt b/scrapped_outputs/aad6f442169ecc29203d649753bfc040.txt new file mode 100644 index 0000000000000000000000000000000000000000..37cb8ee91a7a6d494c837efd70eece6c46a9851b --- /dev/null +++ b/scrapped_outputs/aad6f442169ecc29203d649753bfc040.txt @@ -0,0 +1,146 @@ +Loaders + +There are many ways to train adapter neural networks for diffusion models, such as +Textual Inversion +LoRA +Hypernetworks +Such adapter neural networks often only consist of a fraction of the number of weights compared +to the pretrained model and as such are very portable. The Diffusers library offers an easy-to-use +API to load such adapter neural networks via the loaders.py module. +Note: This module is still highly experimental and prone to future changes. + +LoaderMixins + + +UNet2DConditionLoadersMixin + + +class diffusers.loaders.UNet2DConditionLoadersMixin + +< +source +> +( +) + + + + +load_attn_procs + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers into UNet2DConditionModel. 
Attention processor layers have to be +defined in +cross_attention.py +and be a torch.nn.Module class. +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +save_attn_procs + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +weights_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save an attention processor to a directory, so that it can be re-loaded using the +[load_attn_procs()](/docs/diffusers/v0.14.0/en/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs) method. diff --git a/scrapped_outputs/aafa579ae28b819d95633ab04de77ae0.txt b/scrapped_outputs/aafa579ae28b819d95633ab04de77ae0.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/aafa579ae28b819d95633ab04de77ae0.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! diff --git a/scrapped_outputs/ab1ef8ad7115128cec401a21a3e14c92.txt b/scrapped_outputs/ab1ef8ad7115128cec401a21a3e14c92.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/ab1ef8ad7115128cec401a21a3e14c92.txt @@ -0,0 +1,115 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. 
Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. 
Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. 
Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. 
To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with single concept multiple concepts Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipeline( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/ab36d65d391b06f342878f0ee1b8ac47.txt b/scrapped_outputs/ab36d65d391b06f342878f0ee1b8ac47.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/ab36d65d391b06f342878f0ee1b8ac47.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. 
Import JAX and quickly check whether you’re using a TPU: Copied import jax
+import jax.tools.colab_tpu
+jax.tools.colab_tpu.setup_tpu()
+
+num_devices = jax.device_count()
+device_type = jax.devices()[0].device_kind
+
+print(f"Found {num_devices} JAX devices of type {device_type}.")
+assert "TPU" in device_type, (
+    "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator"
+)
+# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp
+from jax import pmap
+from flax.jax_utils import replicate
+from flax.training.common_utils import shard
+
+from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16
+pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-4",
+    revision="bf16",
+    dtype=dtype,
+) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic"
+prompt = [prompt] * jax.device_count()
+prompt_ids = pipeline.prepare_inputs(prompt)
+prompt_ids.shape
+# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters
+p_params = replicate(params)
+
+# arrays
+prompt_ids = shard(prompt_ids)
+prompt_ids.shape
+# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. 
Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. 
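Before applying it to the pipeline, a self-contained sketch of what jax.pmap does with an ordinary function may help (the toy function and array shape below are illustrative only):

 Copied
import jax
import jax.numpy as jnp


def scale(x):
    # A toy per-device computation: double every element.
    return 2.0 * x


# One shard per device: the leading axis must equal jax.device_count().
x = jnp.ones((jax.device_count(), 4))

# pmap compiles `scale` once and runs one copy of it on each device in parallel.
p_scale = jax.pmap(scale)
y = p_scale(x)
print(y.shape)  # (number of devices, 4)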
To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/ab4d397ec3af622312974db9896c940f.txt b/scrapped_outputs/ab4d397ec3af622312974db9896c940f.txt new file mode 100644 index 0000000000000000000000000000000000000000..01844c7ed27f75fbb3bbfa25dbcbcd0b53a31e81 --- /dev/null +++ b/scrapped_outputs/ab4d397ec3af622312974db9896c940f.txt @@ -0,0 +1,1130 @@ +Text-to-Image Generation with ControlNet Conditioning + + +Overview + +Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang and Maneesh Agrawala. +Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. +The abstract of the paper is the following: +We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications. +This model was contributed by the amazing community contributor takuma104 ❤️ . +Resources: +Paper +Original Code + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionControlNetPipeline +Text-to-Image Generation with ControlNet Conditioning +Colab Example + +Usage example + +In the following we give a simple example of how to use a ControlNet checkpoint with Diffusers for inference. 
+The inference pipeline is the same for all pipelines: +Take an image and run it through a pre-conditioning processor. +Run the pre-processed image through the StableDiffusionControlNetPipeline. +Let’s have a look at a simple example using the Canny Edge ControlNet. + + + Copied +from diffusers import StableDiffusionControlNetPipeline +from diffusers.utils import load_image + +# Let's load the popular vermeer image +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +Next, we process the image to get the canny image. This is step 1. - running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the official checkpoints for more information about other models. +First, we need to install opencv: + + + Copied +pip install opencv-contrib-python +Next, let’s also install all required Hugging Face libraries: + + + Copied +pip install diffusers transformers git+https://github.com/huggingface/accelerate.git +Then we can retrieve the canny edges of the image. + + + Copied +import cv2 +from PIL import Image +import numpy as np + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +Let’s take a look at the processed image. + +Now, we load the official Stable Diffusion 1.5 Model as well as the ControlNet for canny edges. + + + Copied +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +) +To speed-up things and reduce memory, let’s enable model offloading and use the fast UniPCMultistepScheduler. + + + Copied +from diffusers import UniPCMultistepScheduler + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) + +# this command loads the individual model components on GPU on-demand. +pipe.enable_model_cpu_offload() +Finally, we can run the pipeline: + + + Copied +generator = torch.manual_seed(0) + +out_image = pipe( + "disco dancer with colorful lights", num_inference_steps=20, generator=generator, image=canny_image +).images[0] +This should take only around 3-4 seconds on GPU (depending on hardware). The output image then looks as follows: + +Note: To see how to run all other ControlNet checkpoints, please have a look at ControlNet with Stable Diffusion 1.5. + +Combining multiple conditionings + +Multiple ControlNet conditionings can be combined for a single image generation. Pass a list of ControlNets to the pipeline’s constructor and a corresponding list of conditionings to __call__. +When combining conditionings, it is helpful to mask conditionings such that they do not overlap. In the example, we mask the middle of the canny map where the pose conditioning is located. +It can also be helpful to vary the controlnet_conditioning_scales to emphasize one conditioning over the other. 
+ +Canny conditioning + +The original image: + +Prepare the conditioning: + + + Copied +from diffusers.utils import load_image +from PIL import Image +import cv2 +import numpy as np +from diffusers.utils import load_image + +canny_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +canny_image = np.array(canny_image) + +low_threshold = 100 +high_threshold = 200 + +canny_image = cv2.Canny(canny_image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlayed +zero_start = canny_image.shape[1] // 4 +zero_end = zero_start + canny_image.shape[1] // 2 +canny_image[:, zero_start:zero_end] = 0 + +canny_image = canny_image[:, :, None] +canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2) +canny_image = Image.fromarray(canny_image) + + +Openpose conditioning + +The original image: + +Prepare the conditioning: + + + Copied +from controlnet_aux import OpenposeDetector +from diffusers.utils import load_image + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") + +openpose_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(openpose_image) + + +Running ControlNet with multiple conditionings + + + + Copied +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = [ + ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16), + ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16), +] + +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) + +pipe.enable_xformers_memory_efficient_attention() +pipe.enable_model_cpu_offload() + +prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.Generator(device="cpu").manual_seed(1) + +images = [openpose_image, canny_image] + +image = pipe( + prompt, + images, + num_inference_steps=20, + generator=generator, + negative_prompt=negative_prompt, + controlnet_conditioning_scale=[1.0, 0.8], +).images[0] + +image.save("./multi_controlnet_output.png") + + +Guess Mode + +Guess Mode is a ControlNet feature that was implemented after the publication of the paper. The description states: +In this mode, the ControlNet encoder will try best to recognize the content of the input control map, like depth map, edge map, scribbles, etc, even if you remove all prompts. + +The core implementation: + +It adjusts the scale of the output residuals from ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1. As the blocks get deeper, the scale increases exponentially, and the scale for the output of the MidBlock becomes 1.0. +Since the core implementation is just this, it does not have any impact on prompt conditioning. While it is common to use it without specifying any prompts, it is also possible to provide prompts if desired. + +Usage: + +Just specify guess_mode=True in the pipe() function. A guidance_scale between 3.0 and 5.0 is recommended. 
+ + + Copied +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny") +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet).to( + "cuda" +) +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +image.save("guess_mode_generated.png") + +Output image comparison: + + +Canny Control Example +no guess_mode with prompt +guess_mode without prompt + + + +Available checkpoints + +ControlNet requires a control image in addition to the text-to-image prompt. +Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to know more. +All checkpoints can be found under the authors’ namespace lllyasviel. +13.04.2024 Update: The author has released improved controlnet checkpoints v1.1 - see here. + +ControlNet v1.0 + +Model Name +Control Image Overview +Control Image Example +Generated Image Example +lllyasviel/sd-controlnet-canny Trained with canny edge detection +A monochrome image with white edges on a black background. + + +lllyasviel/sd-controlnet-depth Trained with Midas depth estimation +A grayscale image with black representing deep areas and white representing shallow areas. + + +lllyasviel/sd-controlnet-hed Trained with HED edge detection (soft edge) +A monochrome image with white soft edges on a black background. + + +lllyasviel/sd-controlnet-mlsd Trained with M-LSD line detection +A monochrome image composed only of white straight lines on a black background. + + +lllyasviel/sd-controlnet-normal Trained with normal map +A normal mapped image. + + +lllyasviel/sd-controlnet-openpose Trained with OpenPose bone image +A OpenPose bone image. + + +lllyasviel/sd-controlnet-scribble Trained with human scribbles +A hand-drawn monochrome image with white outlines on a black background. + + +lllyasviel/sd-controlnet-segTrained with semantic segmentation +An ADE20K’s segmentation protocol image. + + + +ControlNet v1.1 + +Model Name +Control Image Overview +Control Image Example +Generated Image Example +lllyasviel/control_v11p_sd15_canny Trained with canny edge detection +A monochrome image with white edges on a black background. + + +lllyasviel/control_v11e_sd15_ip2p Trained with pixel to pixel instruction +No condition . + + +lllyasviel/control_v11p_sd15_inpaint Trained with image inpainting +No condition. + + +lllyasviel/control_v11p_sd15_mlsd Trained with multi-level line segment detection +An image with annotated line segments. + + +lllyasviel/control_v11f1p_sd15_depth Trained with depth estimation +An image with depth information, usually represented as a grayscale image. + + +lllyasviel/control_v11p_sd15_normalbae Trained with surface normal estimation +An image with surface normal information, usually represented as a color-coded image. + + +lllyasviel/control_v11p_sd15_seg Trained with image segmentation +An image with segmented regions, usually represented as a color-coded image. + + +lllyasviel/control_v11p_sd15_lineart Trained with line art generation +An image with line art, usually black lines on a white background. 
+ + +lllyasviel/control_v11p_sd15s2_lineart_anime Trained with anime line art generation +An image with anime-style line art. + + +lllyasviel/control_v11p_sd15_openpose Trained with human pose estimation +An image with human poses, usually represented as a set of keypoints or skeletons. + + +lllyasviel/control_v11p_sd15_scribble Trained with scribble-based image generation +An image with scribbles, usually random or user-drawn strokes. + + +lllyasviel/control_v11p_sd15_softedge Trained with soft edge image generation +An image with soft edges, usually to create a more painterly or artistic effect. + + +lllyasviel/control_v11e_sd15_shuffle Trained with image shuffling +An image with shuffled patches or regions. + + + +StableDiffusionControlNetPipeline + + +class diffusers.StableDiffusionControlNetPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +controlnet: typing.Union[diffusers.models.controlnet.ControlNetModel, typing.List[diffusers.models.controlnet.ControlNetModel], typing.Tuple[diffusers.models.controlnet.ControlNetModel], diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet.MultiControlNetModel] +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[torch.FloatTensor], typing.List[PIL.Image.Image]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +controlnet_conditioning_scale: typing.Union[float, typing.List[float]] = 1.0 +guess_mode: bool = False + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
+ + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. + + +guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... 
) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. 
+ + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and
+Automatic1111 formats are supported (see example below).
+This function is experimental and might change in the future.
+It is required to be logged in (huggingface-cli login) when you want to use private or gated
+models.
+Example:
+
+To load a textual inversion embedding vector in diffusers format:
+
+
+ Copied
+from diffusers import StableDiffusionPipeline
+import torch
+
+model_id = "runwayml/stable-diffusion-v1-5"
+pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+
+pipe.load_textual_inversion("sd-concepts-library/cat-toy")
+
+prompt = "A <cat-toy> backpack"
+
+image = pipe(prompt, num_inference_steps=50).images[0]
+image.save("cat-backpack.png")
+To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector,
+
+e.g. from civitAI and then load the vector locally:
+
+
+ Copied
+from diffusers import StableDiffusionPipeline
+import torch
+
+model_id = "runwayml/stable-diffusion-v1-5"
+pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+
+pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
+
+prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
+
+image = pipe(prompt, num_inference_steps=50).images[0]
+image.save("character.png")
+
+disable_vae_tiling
+
+<
+source
+>
+(
+)
+
+
+
+Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to
+computing decoding in one step.
+
+enable_model_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
+to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
+method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
+enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
+
+enable_sequential_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
+text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a
+torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
+
+enable_vae_tiling
+
+<
+source
+>
+(
+)
+
+
+
+Enable tiled VAE decoding.
+When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in
+several steps. This is useful to save a large amount of memory and to allow the processing of larger images. 
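These helpers can be combined when running the ControlNet pipeline at higher resolutions. A minimal sketch that reuses the canny_image prepared in the earlier example (the prompt is illustrative):

 Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

# Move each sub-model to the GPU only while it runs, and tile the VAE for large outputs.
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

image = pipe(
    "a castle on a hill, best quality", image=canny_image, num_inference_steps=20
).images[0]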
+ +FlaxStableDiffusionControlNetPipeline + + +class diffusers.FlaxStableDiffusionControlNetPipeline + +< +source +> +( +vae: FlaxAutoencoderKL +text_encoder: FlaxCLIPTextModel +tokenizer: CLIPTokenizer +unet: FlaxUNet2DConditionModel +controlnet: FlaxControlNetModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler] +safety_checker: FlaxStableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +dtype: dtype = + +) + + +Parameters + +vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (FlaxUNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. + + +safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. +This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt_ids: array +image: array +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +prng_seed: PRNGKeyArray +num_inference_steps: int = 50 +guidance_scale: typing.Union[float, array] = 7.5 +latents: array = None +neg_prompt_ids: array = None +controlnet_conditioning_scale: typing.Union[float, array] = 1.0 +return_dict: bool = True +jit: bool = False + +) +→ +FlaxStableDiffusionPipelineOutput or tuple + +Parameters + +prompt_ids (jnp.array) — +The prompt or prompts to guide the image generation. + + +image (jnp.array) — +Array representing the ControlNet input condition. ControlNet use this input condition to generate +guidance to Unet. + + +params (Dict or FrozenDict) — Dictionary containing the model parameters/weights + + +prng_seed (jax.random.KeyArray or jax.Array) — Array containing random number generator key + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +latents (jnp.array, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +controlnet_conditioning_scale (float or jnp.array, optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. + + +jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. NOTE: This argument +exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release. + + +Returns + +FlaxStableDiffusionPipelineOutput or tuple + + + +FlaxStableDiffusionPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def image_grid(imgs, rows, cols): +... w, h = imgs[0].size +... grid = Image.new("RGB", size=(cols * w, rows * h)) +... for i, img in enumerate(imgs): +... grid.paste(img, box=(i % cols * w, i // cols * h)) +... return grid + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... 
prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") diff --git a/scrapped_outputs/ab53601823df871f613184c5a066241e.txt b/scrapped_outputs/ab53601823df871f613184c5a066241e.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c0a08b7cecfdee2741d5b8ad6d0c8c331f822a1 --- /dev/null +++ b/scrapped_outputs/ab53601823df871f613184c5a066241e.txt @@ -0,0 +1,29 @@ +How to use the ONNX Runtime for inference + +🤗 Diffusers provides a Stable Diffusion pipeline compatible with the ONNX Runtime. This allows you to run Stable Diffusion on any hardware that supports ONNX (including CPUs), and where an accelerated version of PyTorch is not available. + +Installation + +TODO + +Stable Diffusion Inference + +The snippet below demonstrates how to use the ONNX runtime. You need to use StableDiffusionOnnxPipeline instead of StableDiffusionPipeline. You also need to download the weights from the onnx branch of the repository, and indicate the runtime provider you want to use. + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionOnnxPipeline + +pipe = StableDiffusionOnnxPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + revision="onnx", + provider="CUDAExecutionProvider", +) + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] + +Known Issues + +Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. diff --git a/scrapped_outputs/ab6c6b0f1ae5bcf422ba8f03715a7f10.txt b/scrapped_outputs/ab6c6b0f1ae5bcf422ba8f03715a7f10.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0d5ffb83e07315423c11b905ac9fe8aa24c736 --- /dev/null +++ b/scrapped_outputs/ab6c6b0f1ae5bcf422ba8f03715a7f10.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Hide Pytorch content Note - PyTorch only supports Python 3.8 - 3.11 on Windows. Copied pip install diffusers["torch"] transformers JAX Hide JAX content Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed. 
To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version.
+The main version is useful for staying up-to-date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn’t been rolled out yet.
+However, this means the main version may not always be stable.
+We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day.
+If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git
+cd diffusers Pytorch Hide Pytorch content Copied pip install -e ".[torch]" JAX Hide JAX content Copied pip install -e ".[flax]" These commands link the cloned repository folder to your Python library paths.
+Python will now look inside the folder you cloned to in addition to the normal library paths.
+For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/
+git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache, which is usually in your home directory. You can change the cache location with the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables, or by configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests.
+The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class,
+and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub.
+This usage data helps us debug issues and prioritize new features.
+Telemetry is only sent when loading models and pipelines from the Hub,
+and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information, and we respect your privacy.
+You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/ab897b537f9dd1a50dd49bdd5894fcb2.txt b/scrapped_outputs/ab897b537f9dd1a50dd49bdd5894fcb2.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0ca022fc192e20ccf6ff3307b2799096156b70 --- /dev/null +++ b/scrapped_outputs/ab897b537f9dd1a50dd49bdd5894fcb2.txt @@ -0,0 +1,44 @@ +Using Diffusers for reinforcement learning
+
+Support for one RL model and related pipelines is included in the experimental source of diffusers.
+More models and examples coming soon!
+
+Diffuser Value-guided Planning
+
+You can run the model from Planning with Diffusion for Flexible Behavior Synthesis with Diffusers.
+The script is located in the RL Examples folder.
+Or, run this example in Colab
+
+class diffusers.experimental.ValueGuidedRLPipeline
+
+<
+source
+>
+(
+value_function: UNet1DModel
+unet: UNet1DModel
+scheduler: DDPMScheduler
+env
+
+)
+
+
+Parameters
+
+value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories based on reward.
+
+
+unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories.
+
+
+scheduler (SchedulerMixin) —
+A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this
+application is DDPMScheduler.
+env — An environment following the OpenAI gym API to act in. For now, only Hopper has pretrained models.
+
+
+
+This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the
+library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
+Pipeline for sampling actions from a diffusion model trained to predict sequences of states.
+Original implementation inspired by this repository: https://github.com/jannerm/diffuser. diff --git a/scrapped_outputs/ab9f5f77b3911d974ff1b70a8d759381.txt b/scrapped_outputs/ab9f5f77b3911d974ff1b70a8d759381.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/ab9f5f77b3911d974ff1b70a8d759381.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
+These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model, so check out its API documentation for how to use Stable Diffusion 2.
We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + 
+
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+init_image = load_image(url)
+prompt = "two tigers"
+negative_prompt = "bad, deformed, ugly, bad anatomy"
+image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
+make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/abb68fe0468e952dd3db82ae9c2eac2f.txt b/scrapped_outputs/abb68fe0468e952dd3db82ae9c2eac2f.txt new file mode 100644 index 0000000000000000000000000000000000000000..84ee568798480dea840f7e1bb501e3ca6528fcc9 --- /dev/null +++ b/scrapped_outputs/abb68fe0468e952dd3db82ae9c2eac2f.txt @@ -0,0 +1,153 @@ +RePaint
+
+
+Overview
+
+RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool.
+The abstract of the paper is the following:
+Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
+RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.
+The original codebase can be found here.
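+The conditioning trick described in the abstract — re-noising the known image regions at every reverse step and merging them with the model's estimate for the masked regions — can be sketched as follows. This is only an illustration of the idea, not the pipeline's internal code; it assumes a scheduler that exposes an add_noise() method (for example DDPMScheduler) and follows the mask convention used below, where 0.0 marks the region to inpaint:
+
+import torch
+
+def merge_known_and_generated(x_prev_generated, original_image, mask, scheduler, t, generator=None):
+    # noise the clean original image down to the current timestep to obtain the "known" region
+    noise = torch.randn(original_image.shape, generator=generator, dtype=original_image.dtype)
+    x_prev_known = scheduler.add_noise(original_image, noise, torch.tensor([t]))
+    # keep the re-noised original where mask == 1.0 and the model's estimate where mask == 0.0
+    return mask * x_prev_known + (1.0 - mask) * x_prev_generated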
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_repaint.py +Image Inpainting +- + +Usage example + + + + Copied +from io import BytesIO + +import torch + +import PIL +import requests +from diffusers import RePaintPipeline, RePaintScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +# Load the original image and the mask as PIL images +original_image = download_image(img_url).resize((256, 256)) +mask_image = download_image(mask_url).resize((256, 256)) + +# Load the RePaint scheduler and pipeline based on a pretrained DDPM model +scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256") +pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler) +pipe = pipe.to("cuda") + +generator = torch.Generator(device="cuda").manual_seed(0) +output = pipe( + original_image=original_image, + mask_image=mask_image, + num_inference_steps=250, + eta=0.0, + jump_length=10, + jump_n_sample=10, + generator=generator, +) +inpainted_image = output.images[0] + +RePaintPipeline + + +class diffusers.RePaintPipeline + +< +source +> +( +unet +scheduler + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] +mask_image: typing.Union[torch.Tensor, PIL.Image.Image] +num_inference_steps: int = 250 +eta: float = 0.0 +jump_length: int = 10 +jump_n_sample: int = 10 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +The original image to inpaint on. + + +mask_image (torch.FloatTensor or PIL.Image.Image) — +The mask_image where 0.0 values define which part of the original image to inpaint (change). + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float) — +The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 - 0.0 is DDIM +and 1.0 is DDPM scheduler respectively. + + +jump_length (int, optional, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +jump_n_sample (int, optional, defaults to 10) — +The number of times we will make forward time jump for a given chosen time sample. Take a look at +Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
diff --git a/scrapped_outputs/abbc45b52eea663790d1cfeaed3d5013.txt b/scrapped_outputs/abbc45b52eea663790d1cfeaed3d5013.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c39533f3811507775688b8fc90c71c93f8c744f --- /dev/null +++ b/scrapped_outputs/abbc45b52eea663790d1cfeaed3d5013.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). 
adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. 
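+For example, FreeU can be enabled on this pipeline before running inference. A brief sketch follows; the scaling factors are illustrative only, and the official FreeU repository should be consulted for values recommended per model:
+
+import torch
+from diffusers import StableDiffusionInstructPix2PixPipeline
+from diffusers.utils import load_image
+
+pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
+    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
+).to("cuda")
+
+# amplify backbone features (b1, b2) and attenuate skip features (s1, s2)
+pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
+
+image = load_image(
+    "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
+).resize((512, 512))
+edited = pipe("make the mountains snowy", image=image).images[0]
+
+# FreeU can be switched off again without reloading the pipeline
+pipe.disable_freeu()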
StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is to push the generated image towards the inital image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/abdfd424a08b6e05f77575e9dc1be3a1.txt b/scrapped_outputs/abdfd424a08b6e05f77575e9dc1be3a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/abe81e74fe53c6c06760deb8f796f53a.txt b/scrapped_outputs/abe81e74fe53c6c06760deb8f796f53a.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/abe81e74fe53c6c06760deb8f796f53a.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. 
All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. 
tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... 
num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. 
max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 language model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to the second text encoder; otherwise, it comes from the first.
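+A minimal sketch (not part of the original docs) of loading the projection model on its own and projecting a pair of dummy text embeddings; the "projection_model" subfolder name and the embedding dimensions (512 for CLAP, 1024 for Flan-T5) are assumptions based on the cvssp/audioldm2 checkpoint layout: Copied
+>>> import torch
+>>> from diffusers import AudioLDM2ProjectionModel
+
+>>> # assumed subfolder name, matching the pipeline component of the same name
+>>> projection_model = AudioLDM2ProjectionModel.from_pretrained("cvssp/audioldm2", subfolder="projection_model")
+
+>>> # dummy CLAP-style (pooled, sequence length 1) and Flan-T5-style hidden states; shapes are illustrative
+>>> clap_hidden_states = torch.randn(1, 1, 512)
+>>> t5_hidden_states = torch.randn(1, 8, 1024)
+>>> output = projection_model(hidden_states=clap_hidden_states, hidden_states_1=t5_hidden_states)
+>>> # the output bundles the projected, concatenated hidden states and the corresponding attention masks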
forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. 
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, defaults to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, defaults to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving).
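+As an illustration (not from the original docs), the UNet can be loaded on its own and handed back to AudioLDM2Pipeline like any other diffusers component; the "unet" subfolder name is an assumption based on how the pipeline checkpoint is laid out: Copied
+>>> import torch
+>>> from diffusers import AudioLDM2Pipeline, AudioLDM2UNet2DConditionModel
+
+>>> # load only the UNet, e.g. to inspect its config or to swap in a fine-tuned variant
+>>> unet = AudioLDM2UNet2DConditionModel.from_pretrained("cvssp/audioldm2", subfolder="unet", torch_dtype=torch.float16)
+>>> print(unet.config.cross_attention_dim)
+
+>>> # pass the component back to the pipeline
+>>> pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", unet=unet, torch_dtype=torch.float16)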
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/ac355cdb63ebb6cb5b45c1f978048c9a.txt b/scrapped_outputs/ac355cdb63ebb6cb5b45c1f978048c9a.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/ac355cdb63ebb6cb5b45c1f978048c9a.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. 
For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), 
upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/ac5f8ade9471a686729a97dfa6844934.txt b/scrapped_outputs/ac5f8ade9471a686729a97dfa6844934.txt new file mode 100644 index 0000000000000000000000000000000000000000..7334d3b19bb90733d54619947604ca646d8ae003 --- /dev/null +++ b/scrapped_outputs/ac5f8ade9471a686729a97dfa6844934.txt @@ -0,0 +1,110 @@ +Controlled generation + +Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. +Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. +Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. +We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. +We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. +Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. +Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. +Instruct Pix2Pix +Pix2Pix Zero +Attend and Excite +Semantic Guidance +Self-attention Guidance +Depth2Image +MultiDiffusion Panorama +DreamBooth +Textual Inversion +ControlNet +Prompt Weighting + +Instruct Pix2Pix + +Paper +Instruct Pix2Pix is fine-tuned from stable diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +Instruct Pix2Pix has been explicitly trained to work well with InstructGPT-like prompts. +See here for more information on how to use it. + +Pix2Pix Zero + +Paper +Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. 
+The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. +Pix2Pix Zero can be used both to edit synthetic images as well as real images. +To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. +To edit a real image, one first generates an image caption using a model like BLIP. Then one applies ddim inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. +Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. +As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. +See here for more information on how to use it. + +Attend and Excite + +Paper +Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. +A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. +Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. +See here for more information on how to use it. + +Semantic Guidance (SEGA) + +Paper +SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. +Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. +Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. +See here for more information on how to use it. + +Self-attention Guidance (SAG) + +Paper +Self-attention Guidance improves the general quality of images. 
+SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. +See here for more information on how to use it. + +Depth2Image + +Project +Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. +It conditions on a monocular depth estimate of the original image. +See here for more information on how to use it. +An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. + +MultiDiffusion Panorama + +Paper +MultiDiffusion defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). +See here for more information on how to use it to generate panoramic images. + +Fine-tuning your own models + +In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. + +DreamBooth + +DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. +See here for more information on how to use it. + +Textual Inversion + +Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. +See here for more information on how to use it. + +ControlNet + +Paper +ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. +See here for more information on how to use it. + +Prompt Weighting + +Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. +For a more in-detail explanation and examples, see here. diff --git a/scrapped_outputs/ac68477143c2f6db0935e743b3365196.txt b/scrapped_outputs/ac68477143c2f6db0935e743b3365196.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/ac68477143c2f6db0935e743b3365196.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. 
You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/acbb13ebe69e946991b0585d0be5536a.txt b/scrapped_outputs/acbb13ebe69e946991b0585d0be5536a.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/acbb13ebe69e946991b0585d0be5536a.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. 
Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/acf97c7cd4e42954b6b032d59df77988.txt b/scrapped_outputs/acf97c7cd4e42954b6b032d59df77988.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca6b3b34aaa6cc1e6674850833f805d99aa68782 --- /dev/null +++ b/scrapped_outputs/acf97c7cd4e42954b6b032d59df77988.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity() to get the current level of verbosity in the logger and +logging.set_verbosity() to set the verbosity to the level of your choice. 
In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. 
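+As a short end-to-end sketch (assembled here for illustration from the setters documented above), the verbosity and progress-bar helpers can be combined around a pipeline run: Copied
+from diffusers.utils import logging
+
+# silence everything below ERROR and hide the tqdm progress bars
+logging.set_verbosity_error()
+logging.disable_progress_bar()
+
+# ... run your pipelines here ...
+
+# restore the default behavior afterwards
+logging.set_verbosity_warning()
+logging.enable_progress_bar()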
diff --git a/scrapped_outputs/ad2f6aa36f18fdfc39e500c9410f02e5.txt b/scrapped_outputs/ad2f6aa36f18fdfc39e500c9410f02e5.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5eb4e7be4c261d898545230090f92fda4b54478 --- /dev/null +++ b/scrapped_outputs/ad2f6aa36f18fdfc39e500c9410f02e5.txt @@ -0,0 +1,192 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", + torch_dtype=torch.float16 +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16 +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. 
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. 
Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! 
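For example, a minimal sketch of swapping in a different scheduler before running the ensemble (this assumes the base and refiner pipelines loaded above, and uses DPMSolverMultistepScheduler purely as an illustration; any scheduler created via from_config should plug in the same way):
Copied
from diffusers import DPMSolverMultistepScheduler

# reuse the existing scheduler configuration when swapping schedulers
base.scheduler = DPMSolverMultistepScheduler.from_config(base.scheduler.config)
refiner.scheduler = DPMSolverMultistepScheduler.from_config(refiner.scheduler.config)

# the denoising_end / denoising_start split is used exactly as before
prompt = "A majestic lion jumping from a big stone at night"
image = base(prompt=prompt, num_inference_steps=40, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, num_inference_steps=40, denoising_start=0.8, image=image).images[0]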
Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 
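As a rough sketch of how the size micro-conditioning is passed at inference time (this assumes the SDXL base checkpoint used throughout this guide; original_size and target_size are the corresponding pipeline call arguments):
Copied
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# conditioning on a low original_size makes the 1024x1024 output resemble an upscaled low-resolution image
image = pipeline(prompt=prompt, original_size=(256, 256), target_size=(1024, 1024)).images[0]
image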
🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. 
Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/ad440233c763d2bb709b7beafce349e6.txt b/scrapped_outputs/ad440233c763d2bb709b7beafce349e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/ad440233c763d2bb709b7beafce349e6.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation: generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator objects to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 as an example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference.
To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/ad49f9edc970b266b4ea861ad944439f.txt b/scrapped_outputs/ad49f9edc970b266b4ea861ad944439f.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/ad49f9edc970b266b4ea861ad944439f.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. 
For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/ad4f1a5f70ed97adfea766f034b7ed3e.txt b/scrapped_outputs/ad4f1a5f70ed97adfea766f034b7ed3e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/ad4f1a5f70ed97adfea766f034b7ed3e.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. 
Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. 
Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/ad76b962c1de0a11acc4a8bf38124809.txt b/scrapped_outputs/ad76b962c1de0a11acc4a8bf38124809.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/ad76b962c1de0a11acc4a8bf38124809.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. 
In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. 
AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. 
Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. 
generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 langauge model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensorof shape(batch_size, sequence_length, hidden_size)`) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). 
langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively. Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first. forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. 
act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). 
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/ad8bed543d4651b09822686d4b79fb18.txt b/scrapped_outputs/ad8bed543d4651b09822686d4b79fb18.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ad90a998b0404659ee449d580bc675b4.txt b/scrapped_outputs/ad90a998b0404659ee449d580bc675b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..edc4f7b1ca0249c72aa65698e4a858d07f008b84 --- /dev/null +++ b/scrapped_outputs/ad90a998b0404659ee449d580bc675b4.txt @@ -0,0 +1,146 @@ +Reproducibility + +Before reading about reproducibility for Diffusers, it is strongly recommended to take a look at +PyTorch’s statement about reproducibility. +PyTorch states that +completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. +While one can never expect the same results across platforms, one can expect results to be reproducible +across releases, platforms, etc… within a certain tolerance. However, this tolerance strongly varies +depending on the diffusion pipeline and checkpoint. 
+In the following, we show how to best control sources of randomness for diffusion models. + +Inference + +During inference, diffusion pipelines heavily rely on random sampling operations, such as creating the +Gaussian noise tensors to be denoised and adding noise during the scheduler step. +Let’s have a look at an example. We run the DDIM pipeline +for just two inference steps and return a numpy tensor to look into the numerical values of the output. + + + Copied +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) +Running the above prints a value of 1464.2076, but running it again prints a different +value of 1495.1768. What is going on here? Every time the pipeline is run, Gaussian noise +is created and step-wise denoised. To create the Gaussian noise with torch.randn, a different random seed is taken every time, thus leading to a different result. +This is a desired property of diffusion pipelines, as it means that the pipeline can create a different random image every time it is run. In many cases, one would like to generate the exact same image as a certain +run, in which case an instance of a PyTorch generator has to be passed: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above always prints a value of 1491.1711 - even upon running it again - because we +define the generator object to be passed to all random functions of the pipeline. +If you run this code snippet on your specific hardware and version, you should get a similar, if not the same, result. +It might be a bit unintuitive at first to pass generator objects to the pipelines instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as generators are random states that are advanced and can thus be +passed to multiple pipelines in a sequence. +Great! Now, we know how to write reproducible pipelines, but it gets a bit trickier since the above example only runs on the CPU. How do we also achieve reproducibility on GPU? +In short, one should not expect full reproducibility across different hardware when running pipelines on GPU, +as matrix multiplications are less deterministic on GPU than on CPU and diffusion pipelines tend to require +a lot of matrix multiplications. Let’s see what we can do to keep the randomness within limits across +different GPU hardware.
+To achieve maximum performance, it is recommended to create the generator directly on GPU when running +the pipeline on GPU: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above now prints a value of 1389.8634, even though we’re using the exact same seed! +This is unfortunate as it means the results we achieved on GPU cannot be reproduced on CPU. +Nevertheless, it should be expected since the GPU uses a different random number generator than the CPU. +To circumvent this problem, we created a randn_tensor function, which can create random noise +on the CPU and then move the tensor to GPU if necessary. The function is used everywhere inside the pipelines, allowing the user to always pass a CPU generator even if the pipeline is run on GPU: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above now prints a value of 1491.1713, much closer to the value of 1491.1711 obtained when +the pipeline is fully run on the CPU. +As a consequence, we recommend always passing a CPU generator if reproducibility is important. +The loss of performance is often negligible, and the generated values are much closer to those obtained +when the pipeline is fully run on the CPU. +Finally, we noticed that more complex pipelines, such as UnCLIPPipeline, are often extremely +susceptible to precision error propagation, and thus one cannot expect even similar results across +different GPU hardware or PyTorch versions. In such cases, one has to make sure to run on +exactly the same hardware and PyTorch version for full reproducibility. + +Randomness utilities + + +randn_tensor + + +diffusers.utils.randn_tensor + +< +source +> +( +shape: typing.Union[typing.Tuple, typing.List] +generator: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None +device: typing.Optional[ForwardRef('torch.device')] = None +dtype: typing.Optional[ForwardRef('torch.dtype')] = None +layout: typing.Optional[ForwardRef('torch.layout')] = None + +) + + + +This is a helper function that creates random tensors on the desired device with the desired dtype. When +passing a list of generators, one can seed each batch element individually. If CPU generators are passed, the tensor +will always be created on the CPU.
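To make the behavior of this helper concrete, here is a short, self-contained sketch added as an illustration. It assumes a CUDA device is available and falls back to the CPU otherwise; the import path follows the documentation above, while newer releases expose the same function under diffusers.utils.torch_utils.
 Copied
import torch
from diffusers.utils import randn_tensor  # newer releases: diffusers.utils.torch_utils

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A single CPU generator: the noise values are reproducible even though the
# resulting tensor is placed on `device`.
generator = torch.Generator(device="cpu").manual_seed(0)
noise = randn_tensor((1, 3, 32, 32), generator=generator, device=device)
print(noise.device, noise.flatten()[:3])

# A list of CPU generators: each element of the batch gets its own seed, so a
# single image can be reproduced independently of the batch it was generated in.
generators = [torch.Generator(device="cpu").manual_seed(seed) for seed in range(4)]
batch_noise = randn_tensor((4, 3, 32, 32), generator=generators, device=device)
print(batch_noise.shape)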
diff --git a/scrapped_outputs/ad93e5526005e692fc573edcffac0868.txt b/scrapped_outputs/ad93e5526005e692fc573edcffac0868.txt new file mode 100644 index 0000000000000000000000000000000000000000..370ce691af60ec569bb22a8523c7b30831598db5 --- /dev/null +++ b/scrapped_outputs/ad93e5526005e692fc573edcffac0868.txt @@ -0,0 +1,260 @@ +Performing inference with LCM-LoRA Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what is typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifier-free guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. You can also use guidance with LCM-LoRA, but due to the nature of the training, the model is very sensitive to guidance_scale values, and high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5.
Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet For this example, we’ll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for every input, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well.
Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/ad9e25ef72059872c1ad5890a5e38955.txt b/scrapped_outputs/ad9e25ef72059872c1ad5890a5e38955.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7329224ae6981db4a03041ff4d3d5107653214c --- /dev/null +++ b/scrapped_outputs/ad9e25ef72059872c1ad5890a5e38955.txt @@ -0,0 +1,235 @@ +variance exploding stochastic differential equation (VE-SDE) scheduler + + +Overview + +Original paper can be found here. + +ScoreSdeVeScheduler + + +class diffusers.ScoreSdeVeScheduler + +< +source +> +( +num_train_timesteps: int = 2000 +snr: float = 0.15 +sigma_min: float = 0.01 +sigma_max: float = 1348.0 +sampling_eps: float = 1e-05 +correct_steps: int = 1 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. 
+ + +snr (float) — +coefficient weighting the step from the model_output sample (from the network) to the random noise. + + +sigma_min (float) — +initial noise scale for sigma sequence in sampling procedure. The minimum sigma should mirror the +distribution of the data. + + +sigma_max (float) — maximum value used for the range of continuous timesteps passed into the model. + + +sampling_eps (float) — the end value of sampling, where timesteps decrease progressively from 1 to +epsilon. — + + +correct_steps (int) — number of correction steps performed on a produced sample. + + + +The variance exploding stochastic differential equation (SDE) scheduler. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_sigmas + +< +source +> +( +num_inference_steps: int +sigma_min: float = None +sigma_max: float = None +sampling_eps: float = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +sigma_min (float, optional) — +initial noise scale value (overrides value given at Scheduler instantiation). + + +sigma_max (float, optional) — final noise scale value (overrides value given at Scheduler instantiation). + + +sampling_eps (float, optional) — final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the noise scales used for the diffusion chain. Supporting function to be run before inference. +The sigmas control the weight of the drift and diffusion components of sample update. + +set_timesteps + +< +source +> +( +num_inference_steps: int +sampling_eps: float = None +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +sampling_eps (float, optional) — final timestep value (overrides value given at Scheduler instantiation). + + + +Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference. + +step_correct + +< +source +> +( +model_output: FloatTensor +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. 
+ + +Correct the predicted sample based on the output model_output of the network. This is often run repeatedly +after making the prediction for the previous timestep. + +step_pred + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SdeVeOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +generator — random number generator. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SdeVeOutput or tuple + + + +SdeVeOutput if +return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/adb1a350d1c374ec11830c694806403e.txt b/scrapped_outputs/adb1a350d1c374ec11830c694806403e.txt new file mode 100644 index 0000000000000000000000000000000000000000..d61c3f265da975aac5d562125c788f3e245e5b73 --- /dev/null +++ b/scrapped_outputs/adb1a350d1c374ec11830c694806403e.txt @@ -0,0 +1,96 @@ +ControlNet The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. 
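As a quick orientation before the API reference below, the following minimal sketch shows the workflow the abstract describes: deriving a Canny edge map from an input image and using it to spatially condition Stable Diffusion. The checkpoint names and the input image are common public examples chosen only for illustration, and a CUDA GPU is assumed for the .to("cuda") call.
 Copied
import cv2
import numpy as np
import torch
from PIL import Image

from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

# Derive the spatial conditioning signal (a Canny edge map) from an input image.
image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(image), 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Plug the ControlNet into a Stable Diffusion pipeline and generate.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

output = pipe(
    "a futuristic city, highly detailed",
    image=canny_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,
).images[0]
output.save("controlnet_canny.png")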
Loading from the original format By default the ControlNetModel should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalControlnetMixin.from_single_file as follows: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) ControlNetModel class diffusers.ControlNetModel < source > ( in_channels: int = 4 conditioning_channels: int = 3 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 transformer_layers_per_block: Union = 1 encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' projection_class_embeddings_input_dim: Optional = None controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) global_pool_conditions: bool = False addition_embed_type_num_heads: int = 64 ) Parameters in_channels (int, defaults to 4) — +The number of channels in the input sample. flip_sin_to_cos (bool, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, defaults to 0) — +The frequency shift to apply to the time embedding. down_block_types (tuple[str], defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. only_cross_attention (Union[bool, Tuple[bool]], defaults to False) — block_out_channels (tuple[int], defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, defaults to 2) — +The number of layers per block. downsample_padding (int, defaults to 1) — +The padding to use for the downsampling convolution. mid_block_scale_factor (float, defaults to 1) — +The scale factor to use for the mid block. act_fn (str, defaults to “silu”) — +The activation function to use. norm_num_groups (int, optional, defaults to 32) — +The number of groups to use for the normalization. If None, normalization and activation layers is skipped +in post-processing. norm_eps (float, defaults to 1e-5) — +The epsilon to use for the normalization. cross_attention_dim (int, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. 
Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. attention_head_dim (Union[int, Tuple[int]], defaults to 8) — +The dimension of the attention heads. use_linear_projection (bool, defaults to False) — class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. num_class_embeds (int, optional, defaults to 0) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. upcast_attention (bool, defaults to False) — resnet_time_scale_shift (str, defaults to "default") — +Time scale shift config for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. projection_class_embeddings_input_dim (int, optional, defaults to None) — +The dimension of the class_labels input when class_embed_type="projection". Required when +class_embed_type="projection". controlnet_conditioning_channel_order (str, defaults to "rgb") — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple[int], optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. global_pool_conditions (bool, defaults to False) — +TODO(Patrick) - unused parameter. addition_embed_type_num_heads (int, defaults to 64) — +The number of heads to use for the TextTimeEmbedding layer. A ControlNet model. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor controlnet_cond: FloatTensor conditioning_scale: float = 1.0 class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None added_cond_kwargs: Optional = None cross_attention_kwargs: Optional = None guess_mode: bool = False return_dict: bool = True ) → ControlNetOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor. timestep (Union[torch.Tensor, float, int]) — +The number of timesteps to denoise an input. encoder_hidden_states (torch.Tensor) — +The encoder hidden states. controlnet_cond (torch.FloatTensor) — +The conditional input tensor of shape (batch_size, sequence_length, hidden_size). conditioning_scale (float, defaults to 1.0) — +The scale factor for ControlNet outputs. class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond (torch.Tensor, optional, defaults to None) — +Additional conditional embeddings for timestep. If provided, the embeddings will be summed with the +timestep_embedding passed through the self.time_embedding layer to obtain the final timestep +embeddings. 
attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. added_cond_kwargs (dict) — +Additional conditions for the Stable Diffusion XL UNet. cross_attention_kwargs (dict[str], optional, defaults to None) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. guess_mode (bool, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the input content of the input even if +you remove all prompts. A guidance_scale between 3.0 and 5.0 is recommended. return_dict (bool, defaults to True) — +Whether or not to return a ControlNetOutput instead of a plain tuple. Returns +ControlNetOutput or tuple + +If return_dict is True, a ControlNetOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The ControlNetModel forward method. from_unet < source > ( unet: UNet2DConditionModel controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Optional = (16, 32, 96, 256) load_weights_from_unet: bool = True conditioning_channels: int = 3 ) Parameters unet (UNet2DConditionModel) — +The UNet model weights to copy to the ControlNetModel. All configuration options are also copied +where applicable. Instantiate a ControlNetModel from UNet2DConditionModel. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. ControlNetOutput class diffusers.models.controlnet.ControlNetOutput < source > ( down_block_res_samples: Tuple mid_block_res_sample: Tensor ) Parameters down_block_res_samples (tuple[torch.Tensor]) — +A tuple of downsample activations at different resolutions for each downsampling block. Each tensor should +be of shape (batch_size, channel * resolution, height //resolution, width // resolution). Output can be +used to condition the original UNet’s downsampling activations. mid_down_block_re_sample (torch.Tensor) — +The activation of the midde block (the lowest sample resolution). 
Each tensor should be of shape +(batch_size, channel * lowest_resolution, height // lowest_resolution, width // lowest_resolution). +Output can be used to condition the original UNet’s middle block activation. The output of ControlNetModel. FlaxControlNetModel class diffusers.FlaxControlNetModel < source > ( sample_size: int = 32 in_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 controlnet_conditioning_channel_order: str = 'rgb' conditioning_embedding_out_channels: Tuple = (16, 32, 96, 256) parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. Will convert to rgb if it’s bgr. conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in the conditioning_embedding layer. A ControlNet model. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxControlNetOutput class diffusers.models.controlnet_flax.FlaxControlNetOutput < source > ( down_block_res_samples: Array mid_block_res_sample: Array ) Parameters down_block_res_samples (jnp.ndarray) — mid_block_res_sample (jnp.ndarray) — The output of FlaxControlNetModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
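To illustrate the from_unet() initializer documented above, which is typically the starting point when training a new ControlNet, here is a small sketch. The base checkpoint and tensor shapes correspond to Stable Diffusion v1-5 and are shown only as an example.
 Copied
import torch
from diffusers import ControlNetModel, UNet2DConditionModel

# Copy the encoder weights of an existing UNet into a fresh ControlNet.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
controlnet = ControlNetModel.from_unet(unet)

# The ControlNet takes the same latents, timestep and text embeddings as the UNet,
# plus a pixel-space conditioning image with `conditioning_channels` channels.
sample = torch.randn(1, 4, 64, 64)                 # noisy latents
timestep = torch.tensor([10])
encoder_hidden_states = torch.randn(1, 77, 768)    # CLIP text embeddings
controlnet_cond = torch.randn(1, 3, 512, 512)      # e.g. a Canny edge map

down_block_res_samples, mid_block_res_sample = controlnet(
    sample,
    timestep,
    encoder_hidden_states=encoder_hidden_states,
    controlnet_cond=controlnet_cond,
    return_dict=False,
)
print(len(down_block_res_samples), mid_block_res_sample.shape)
The returned residuals are then added to the corresponding down and mid block activations of the UNet during denoising.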
diff --git a/scrapped_outputs/adb391bf05330396d7b7dd9d51c6b480.txt b/scrapped_outputs/adb391bf05330396d7b7dd9d51c6b480.txt new file mode 100644 index 0000000000000000000000000000000000000000..67d20ffe84f73ce9a3ad3216bb740471c0ea1a73 --- /dev/null +++ b/scrapped_outputs/adb391bf05330396d7b7dd9d51c6b480.txt @@ -0,0 +1,93 @@ +Text-to-image model editing Editing Implicit Assumptions in Text-to-Image Diffusion Models is by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov. This pipeline enables editing diffusion model weights, such that its assumptions of a given concept are changed. The resulting change is expected to take effect in all prompt generations related to the edited concept. The abstract from the paper is: Text-to-image diffusion models often make implicit assumptions about the world when generating images. While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a “source” under-specified prompt for which the model makes an implicit assumption (e.g., “a pack of roses”), and a “destination” prompt that describes the same setting, but with a specified desired attribute (e.g., “a pack of blue roses”). TIME then updates the model’s cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model’s parameters in under one second. To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations. You can find additional information about model editing on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionModelEditingPipeline class diffusers.StableDiffusionModelEditingPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: SchedulerMixin safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True with_to_k: bool = True with_augs: list = ['A photo of ', 'An image of ', 'A picture of '] ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPFeatureExtractor) — +A CLIPFeatureExtractor to extract features from generated images; used as inputs to the safety_checker. with_to_k (bool) — +Whether to edit the key projection matrices along with the value projection matrices. with_augs (list) — +Textual augmentations to apply while editing the text-to-image model. Set to [] for no augmentations. Pipeline for text-to-image model editing. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: typing.Optional[int] = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionModelEditingPipeline + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt) + +>>> pipe = pipe.to("cuda") + +>>> source_prompt = "A pack of roses" +>>> destination_prompt = "A pack of blue roses" +>>> pipe.edit_model(source_prompt, destination_prompt) + +>>> prompt = "A field of roses" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. edit_model < source > ( source_prompt: str destination_prompt: str lamb: float = 0.1 restart_params: bool = True ) Parameters source_prompt (str) — +The source prompt containing the concept to be edited. destination_prompt (str) — +The destination prompt. Must contain all words from source_prompt with additional ones to specify the +target edit. lamb (float, optional, defaults to 0.1) — +The lambda parameter specifying the regularization intesity. Smaller values increase the editing power. 
restart_params (bool, optional, defaults to True) — +Restart the model parameters to their pre-trained version before editing. This is done to avoid edit +compounding. When it is False, edits accumulate. Apply model editing via closed-form solution (see Eq. 5 in the TIME paper). enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/adf64f4703002f2b93b868eea6f64c02.txt b/scrapped_outputs/adf64f4703002f2b93b868eea6f64c02.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/adf64f4703002f2b93b868eea6f64c02.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. 
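To make this concrete, the short sketch below, using the same google/ddpm-cifar10-32 scheduler that appears in the examples further down, shows how the recorded configuration can be inspected, saved, and loaded back as a plain dictionary.
 Copied
from diffusers import DDPMScheduler

# Every argument passed to __init__ is recorded and exposed under `.config`.
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")
print(scheduler.config.num_train_timesteps)
print(scheduler.config.beta_schedule)

# The configuration can be written to disk as a JSON file ...
scheduler.save_config("ddpm-scheduler-config")

# ... and read back as a plain dictionary without instantiating the class.
config = DDPMScheduler.load_config("ddpm-scheduler-config")
print(config["num_train_timesteps"])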
ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. 
Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/ae34d0b0ba7575909f5ec2b49826d0c3.txt b/scrapped_outputs/ae34d0b0ba7575909f5ec2b49826d0c3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ae4bf187e9774c577798e5c8c3f4c31b.txt b/scrapped_outputs/ae4bf187e9774c577798e5c8c3f4c31b.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2bcdd0eab08a61d4d8ad8d73bfbe01b5aad187f --- /dev/null +++ b/scrapped_outputs/ae4bf187e9774c577798e5c8c3f4c31b.txt @@ -0,0 +1,234 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. 
config_name (str) — Filename to save a model to when calling save_pretrained(). disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. 
revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. 
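In practice, several of the keyword arguments above are combined when loading a large model onto a memory-constrained GPU. The sketch below uses the same checkpoint as the example that follows; whether that repository publishes safetensors weights and an fp16 variant is an assumption, so treat those two flags accordingly. Copied
import torch
from diffusers import UNet2DConditionModel

# Load half-precision weights without first materializing a random fp32 copy in CPU RAM.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
    variant="fp16",          # assumes fp16 variant weights are published for this repo
    use_safetensors=True,    # assumes safetensors weights are available
    low_cpu_mem_usage=True,
)
unet = unet.to("cuda")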
Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). 
from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Defaults to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. 
The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/ae4cdddd1ba20f3b42827ba5d141895f.txt b/scrapped_outputs/ae4cdddd1ba20f3b42827ba5d141895f.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/ae4cdddd1ba20f3b42827ba5d141895f.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. 
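A short sketch that puts these tips together is shown below; the checkpoint and the 16 kHz write rate are taken from the pipeline example further down, while the step count and audio length are illustrative. Copied
import torch
import scipy
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# A descriptive, context-specific prompt tends to work better than a single noun.
prompt = "a clear, high quality recording of a water stream flowing through a forest"
audio = pipe(prompt, num_inference_steps=20, audio_length_in_s=5.0).audios[0]

# The pipeline's vocoder produces 16 kHz mono audio.
scipy.io.wavfile.write("forest_stream.wav", rate=16000, data=audio)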
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/ae5be1fb66b3bd636679b964398408d9.txt b/scrapped_outputs/ae5be1fb66b3bd636679b964398408d9.txt new file mode 100644 index 0000000000000000000000000000000000000000..99c9c7d4f2201d98cc2da9436565b2c181d1c9c1 --- /dev/null +++ b/scrapped_outputs/ae5be1fb66b3bd636679b964398408d9.txt @@ -0,0 +1,83 @@ +Paint by Example Paint by Example: Exemplar-based Image Editing with Diffusion Models is by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen. The abstract from the paper is: Language-guided image editing has achieved great success recently. 
In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The original codebase can be found at Fantasy-Studio/Paint-by-Example, and you can try it out in a demo. Tips Paint by Example is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint is warm-started from CompVis/stable-diffusion-v1-4 to inpaint partly masked images conditioned on example and reference images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PaintByExamplePipeline class diffusers.PaintByExamplePipeline < source > ( vae: AutoencoderKL image_encoder: PaintByExampleImageEncoder unet: UNet2DConditionModel scheduler: Union safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = False ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (PaintByExampleImageEncoder) — +Encodes the example input image. The unet is conditioned on the example image instead of a text prompt. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. 🧪 This is an experimental feature! Pipeline for image-guided image inpainting using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
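As documented for mask_image below, the pipeline follows the usual inpainting convention: white pixels in the mask are repainted from the exemplar, while black pixels are preserved. A minimal sketch of building such a single-channel mask by hand is shown here; the image size and rectangle coordinates are purely illustrative. Copied
from PIL import Image, ImageDraw

# 512x512 luminance mask: black = keep the original pixels, white = repaint from the exemplar.
mask_image = Image.new("L", (512, 512), 0)
draw = ImageDraw.Draw(mask_image)
draw.rectangle((160, 160, 352, 352), fill=255)
mask_image.save("example_mask.png")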
__call__ < source > ( example_image: Union image: Union mask_image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +An example image to guide image generation. image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to be inpainted (parts of the image are masked out with +mask_image and repainted according to prompt). mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image or tensor representing an image batch to mask image. White pixels in the mask are repainted, +while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel +(luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the +expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Example: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO +>>> from diffusers import PaintByExamplePipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +... ) +>>> mask_url = ( +... "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +... ) +>>> example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +>>> init_image = download_image(img_url).resize((512, 512)) +>>> mask_image = download_image(mask_url).resize((512, 512)) +>>> example_image = download_image(example_url).resize((512, 512)) + +>>> pipe = PaintByExamplePipeline.from_pretrained( +... "Fantasy-Studio/Paint-by-Example", +... torch_dtype=torch.float16, +... ) +>>> pipe = pipe.to("cuda") + +>>> image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +>>> image StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/ae842d9529e0d63a5814e8602f7cf041.txt b/scrapped_outputs/ae842d9529e0d63a5814e8602f7cf041.txt new file mode 100644 index 0000000000000000000000000000000000000000..dd4c590f313bcd6ca9c410cd102a6d9dc7d48399 --- /dev/null +++ b/scrapped_outputs/ae842d9529e0d63a5814e8602f7cf041.txt @@ -0,0 +1,298 @@ +Stable Diffusion Latent Upscaler + + +StableDiffusionLatentUpscalePipeline + +The Stable Diffusion Latent Upscaler model was created by Katherine Crowson in collaboration with Stability AI. It can be used on top of any StableDiffusionUpscalePipeline checkpoint to enhance its output image resolution by a factor of 2. +A notebook that demonstrates the original implementation can be found here: +Stable Diffusion Upscaler Demo +Available Checkpoints are: +stabilityai/latent-upscaler: stabilityai/sd-x2-latent-upscaler + +class diffusers.StableDiffusionLatentUpscalePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: EulerDiscreteScheduler + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
+ + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +EulerDiscreteScheduler. + + + +Pipeline to upscale the resolution of Stable Diffusion output images by a factor of 2. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[PIL.Image.Image]] +num_inference_steps: int = 75 +guidance_scale: float = 9.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image upscaling. + + +image (PIL.Image.Image or ListPIL.Image.Image or torch.FloatTensor) — +Image, or tensor representing an image batch which will be upscaled. If it’s a tensor, it can be +either a latent output from a stable diffusion model, or an image tensor in the range [-1, 1]. It +will be considered a latent if image.shape[1] is 4; otherwise, it will be considered to be an +image representation and encoded using this pipeline’s vae encoder. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. 
+ + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. 
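A brief sketch of toggling sliced attention on the upscaler pipeline follows; the checkpoint name matches the example above, and the memory saving comes with the small speed penalty noted in the method description. Copied
import torch
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
)
upscaler = upscaler.to("cuda")

# Split attention into two steps ("auto") to lower peak GPU memory during upscaling.
upscaler.enable_attention_slicing("auto")

# ... run upscaler(...) here ...

# Go back to computing attention in a single step once memory headroom allows.
upscaler.disable_attention_slicing()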
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. diff --git a/scrapped_outputs/ae9169f55def3d96630f8d3d53ad8741.txt b/scrapped_outputs/ae9169f55def3d96630f8d3d53ad8741.txt new file mode 100644 index 0000000000000000000000000000000000000000..3a98c66d29962c8bcbbc0fa8e780c5fcacb9d94f --- /dev/null +++ b/scrapped_outputs/ae9169f55def3d96630f8d3d53ad8741.txt @@ -0,0 +1,102 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts and calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll find a function that generates the timestep weights depending on the number of timesteps and the timestep bias strategy to apply; a standalone sketch of the idea follows.
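It only illustrates the idea behind that helper; the real script's generate_timestep_weights reads its settings from the parsed CLI arguments (the --timestep_bias_* flags listed earlier) and, judging by the --timestep_bias_begin and --timestep_bias_end flags, can also bias an explicit range, so the standalone function name and signature here are assumptions rather than the script's actual API. Copied
import torch

def timestep_bias_weights(num_timesteps, strategy="none", multiplier=2.0, portion=0.25):
    # Uniform sampling over all timesteps unless a bias is requested.
    weights = torch.ones(num_timesteps)
    if strategy == "none":
        return weights / weights.sum()

    # Up-weight a contiguous slice at the start ("earlier") or end ("later") of the schedule.
    num_biased = int(num_timesteps * portion)
    if strategy == "earlier":
        weights[:num_biased] *= multiplier
    elif strategy == "later":
        weights[-num_biased:] *= multiplier
    else:
        raise ValueError(f"unknown timestep bias strategy: {strategy}")
    return weights / weights.sum()

# Biased timestep sampling, mirroring the torch.multinomial call shown in the training loop below.
weights = timestep_bias_weights(1000, strategy="later")
timesteps = torch.multinomial(weights, 4, replacement=True).long()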
Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! + + + + Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." +image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") + + +PyTorch XLA allows you to run PyTorch on XLA devices such as TPUs, which can be faster. The initial warmup step takes longer because the model needs to be compiled and optimized. However, subsequent calls to the pipeline on an input with the same length as the original prompt are much faster because it can reuse the optimized graph. Copied from diffusers import DiffusionPipeline +import torch +import torch_xla.core.xla_model as xm +from time import time + +device = xm.xla_device() +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0").to(device) + +prompt = "A pokemon with green eyes and red legs." +inference_steps = 30 # example number of denoising steps +start = time() +image = pipeline(prompt, num_inference_steps=inference_steps).images[0] +print(f'Compilation time is {time()-start} sec') +image.save("pokemon.png") + +start = time() +image = pipeline(prompt, num_inference_steps=inference_steps).images[0] +print(f'Inference time is {time()-start} sec after compilation') + + + Next steps Congratulations on training an SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use its refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/ae93e3d246b2fb4c81a38f976bf7e690.txt b/scrapped_outputs/ae93e3d246b2fb4c81a38f976bf7e690.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/aebb6bd4a349d6c350d83ef11e458302.txt b/scrapped_outputs/aebb6bd4a349d6c350d83ef11e458302.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/aebb6bd4a349d6c350d83ef11e458302.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al.
for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. 
mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. 
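As a quick orientation before the output class, the following minimal sketch shows how a UNet2DModel can be instantiated and called; the configuration values are arbitrary illustrative choices, not a recommended setup.

import torch
from diffusers import UNet2DModel

# A small unconditional 2D UNet; the sample size must be a multiple of
# 2 ** (len(block_out_channels) - 1), i.e. 8 for the four blocks used here.
model = UNet2DModel(
    sample_size=64,
    in_channels=3,
    out_channels=3,
    block_out_channels=(64, 128, 256, 512),
    down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
)

noisy_sample = torch.randn(1, 3, 64, 64)  # (batch, channel, height, width)
timestep = torch.tensor([10])             # current denoising timestep

with torch.no_grad():
    # forward() returns a UNet2DOutput; .sample holds the predicted tensor,
    # which has the same shape as the input.
    prediction = model(noisy_sample, timestep).sample

print(prediction.shape)  # torch.Size([1, 3, 64, 64])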
UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/aebbc059e31cb4d2954f4b0240650270.txt b/scrapped_outputs/aebbc059e31cb4d2954f4b0240650270.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc3d7a1c7171fd34d73cc2b81943b203dbd4e6fa --- /dev/null +++ b/scrapped_outputs/aebbc059e31cb4d2954f4b0240650270.txt @@ -0,0 +1,457 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to original size. Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. 
Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image A OpenPose bone image. TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image A mmpose skeleton image. 
TencentARC/t2iadapter_seg_sd14v1Trained with semantic segmentation An custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... 
) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. 
If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. 
Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... 
) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
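For instance, encode_prompt can be used to pre-compute the SDXL embeddings once and pass them back through prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, and negative_pooled_prompt_embeds. The sketch below assumes a StableDiffusionXLAdapterPipeline already loaded as pipe and a control image named sketch_image (as in the earlier example), and that the four tensors are returned in the order shown.

(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a dog in real world, high quality",
    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# Reuse the cached embeddings for as many generations as needed.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=sketch_image,
).images[0]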
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/aebc93f5f49c70b892d79b28b08971fe.txt b/scrapped_outputs/aebc93f5f49c70b892d79b28b08971fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/aebc93f5f49c70b892d79b28b08971fe.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. 
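Image-to-image follows the same pattern. A minimal sketch, assuming 🤗 Optimum exposes ORTStableDiffusionImg2ImgPipeline with the same from_pretrained/export interface as the text-to-image pipeline shown above (the input image URL is just an arbitrary example taken from elsewhere in these docs):

from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Export the PyTorch weights to ONNX on the fly, as in the text-to-image example.
pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", export=True)

init_image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png"
).resize((512, 512))

prompt = "sailing ship in storm by Leonardo da Vinci"
# strength controls how much the initial image is altered.
image = pipeline(prompt, image=init_image, strength=0.75).images[0]
image.save("img2img.png")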
diff --git a/scrapped_outputs/aed81ada840356e959f9a80eec4b8431.txt b/scrapped_outputs/aed81ada840356e959f9a80eec4b8431.txt new file mode 100644 index 0000000000000000000000000000000000000000..9c200308786dff55b3d8f085d026e38d2ddec95d --- /dev/null +++ b/scrapped_outputs/aed81ada840356e959f9a80eec4b8431.txt @@ -0,0 +1,96 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with a convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. A short usage sketch that swaps this scheduler into an existing pipeline follows the method reference below. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion.
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. 
DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. 
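A minimal usage sketch tying the tips above together: replace a pipeline's default scheduler via from_config, enable Karras sigmas, and sample in roughly 20 steps (the model id and prompt are placeholders, not values prescribed by this page):

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Reuse the existing scheduler config and switch to DPMSolver++ with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", use_karras_sigmas=True
)
pipe = pipe.to("cuda")
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]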
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/aee4f67efe991749b4b495e54bb922db.txt b/scrapped_outputs/aee4f67efe991749b4b495e54bb922db.txt new file mode 100644 index 0000000000000000000000000000000000000000..eae19b2e2becff1d17c9403c12096c6335834cf4 --- /dev/null +++ b/scrapped_outputs/aee4f67efe991749b4b495e54bb922db.txt @@ -0,0 +1,26 @@ +PNDM Pseudo Numerical Methods for Diffusion Models on Manifolds (PNDM) is by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao. The abstract from the paper is: Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules. The original codebase can be found at luping-liu/PNDM. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. PNDMPipeline class diffusers.PNDMPipeline < source > ( unet: UNet2DModel scheduler: PNDMScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (PNDMScheduler) — +A PNDMScheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
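To make the class signature above concrete, here is a hedged sketch of assembling the pipeline by hand from its two components; it assumes the google/ddpm-cifar10-32 repository stores its denoiser under a unet subfolder, as pipeline repositories typically do:

from diffusers import PNDMPipeline, PNDMScheduler, UNet2DModel

# Assumption: the checkpoint repo exposes its UNet under the "unet" subfolder.
unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")
scheduler = PNDMScheduler()
pipeline = PNDMPipeline(unet=unet, scheduler=scheduler)
image = pipeline(num_inference_steps=50).images[0]
image.save("pndm_manual.png")

In practice, PNDMPipeline.from_pretrained (shown in the example below) performs the same assembly for you.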
__call__ < source > ( batch_size: int = 1 num_inference_steps: int = 50 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import PNDMPipeline + +>>> # load model and scheduler +>>> pndm = PNDMPipeline.from_pretrained("google/ddpm-cifar10-32") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pndm().images[0] + +>>> # save image +>>> image.save("pndm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/af09d27434d1512e9c84feeb35122e29.txt b/scrapped_outputs/af09d27434d1512e9c84feeb35122e29.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/af09d27434d1512e9c84feeb35122e29.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two classes: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs, and to get the fastest possible generation you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub.
Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/af5445d1df873ec97821b2e09de37c82.txt b/scrapped_outputs/af5445d1df873ec97821b2e09de37c82.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/af5445d1df873ec97821b2e09de37c82.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) 
DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. 
Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/af608f033d4103678ebed72fe58d88ed.txt b/scrapped_outputs/af608f033d4103678ebed72fe58d88ed.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/af608f033d4103678ebed72fe58d88ed.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. 
Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/af8fbb3e89035ce5c72e9fbf743ae11d.txt b/scrapped_outputs/af8fbb3e89035ce5c72e9fbf743ae11d.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0a7733b20b78bdc5197af0cfb33fb05e4395be0 --- /dev/null +++ b/scrapped_outputs/af8fbb3e89035ce5c72e9fbf743ae11d.txt @@ -0,0 +1,346 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. 
Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. 
The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. 
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. 
If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. 
If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... 
"https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
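The loading and tuning hooks documented above can be combined on a single pipeline instance. The snippet below is a minimal sketch only: the LoRA path is a hypothetical placeholder and the FreeU factors are example values, not recommendations from this page. Copied import torch
+from diffusers import StableDiffusionPipeline
+
+pipe = StableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+).to("cuda")
+
+# Load LoRA weights into the UNet and text encoder (hypothetical repository id)
+pipe.load_lora_weights("path/to/lora-weights", adapter_name="my_lora")
+
+# Enable FreeU with example scaling factors; see the FreeU repository for tuned values
+pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
+
+# Optionally fuse QKV projections (experimental)
+pipe.fuse_qkv_projections()
+
+image = pipe("masterpiece, best quality, sunset over the ocean").images[0]
+
+# Revert the optional tweaks when they are no longer needed
+pipe.unfuse_qkv_projections()
+pipe.disable_freeu()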
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/afa340e9d917ddb3e526b7e779e6254f.txt b/scrapped_outputs/afa340e9d917ddb3e526b7e779e6254f.txt new file mode 100644 index 0000000000000000000000000000000000000000..f94f567f98cc2ea0c0b0894f04d08e55446dbcc1 --- /dev/null +++ b/scrapped_outputs/afa340e9d917ddb3e526b7e779e6254f.txt @@ -0,0 +1,71 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. 
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. diff --git a/scrapped_outputs/afb805342e13beb58dbd4b5d2e2f9b40.txt b/scrapped_outputs/afb805342e13beb58dbd4b5d2e2f9b40.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/afd5010a71f91147501a5cb8fa16354e.txt b/scrapped_outputs/afd5010a71f91147501a5cb8fa16354e.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f8a405dfba4e08faba47f75f9a9a309028d423f --- /dev/null +++ b/scrapped_outputs/afd5010a71f91147501a5cb8fa16354e.txt @@ -0,0 +1,381 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. 
+Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of frames to be generated.
Default: video_length=8 We can also generate longer videos by processing them in a chunk-by-chunk manner: Copied import torch +import imageio +from diffusers import TextToVideoZeroPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24  # 24 frames ÷ 4 fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for the temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL Support In order to use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control To generate a video from a prompt with additional pose control: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read the video containing the extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract poses from an actual video, read the ControlNet documentation; a sketch of one possible approach is shown below.
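One possible way to extract per-frame pose images yourself is the OpenPose detector from the controlnet_aux package; this is a sketch under that assumption (the package is not part of this guide's demo assets, and the video path is hypothetical). Copied import imageio
+from PIL import Image
+from controlnet_aux import OpenposeDetector
+
+open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
+
+# Read raw frames from your own video (hypothetical path) and run pose detection on each
+reader = imageio.get_reader("my_video.mp4", "ffmpeg")
+frames = [Image.fromarray(reader.get_data(i)) for i in range(8)]
+pose_images = [open_pose(frame) for frame in frames]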
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from a prompt using ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, swapping in the Canny edge ControlNet model; a sketch of the edge-conditioned setup follows.
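The edge-conditioned setup mirrors the pose-guided one. The sketch below assumes opencv-python is installed, that video is a list of PIL frames read as in the previous steps, and that the Canny thresholds are purely illustrative. Copied import cv2
+import imageio
+import numpy as np
+import torch
+from PIL import Image
+from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
+from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
+
+# Derive a Canny edge map for every frame in `video` (a list of PIL images)
+canny_edges = []
+for frame in video:
+    edges = cv2.Canny(np.array(frame.convert("L")), 100, 200)
+    canny_edges.append(Image.fromarray(np.stack([edges] * 3, axis=-1)))
+
+controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
+pipe = StableDiffusionControlNetPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
+).to("cuda")
+
+# Cross-frame attention, fixed latents, and batched prompts work exactly as in the pose example
+pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
+latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)
+
+prompt = "Darth Vader dancing in a desert"
+result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
+imageio.mimsave("video.mp4", result, fps=4)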
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
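As a quick illustration of those two points, the scheduler can be swapped in place and an already-loaded pipeline's components can be handed to another pipeline class. This is only a sketch, assuming the component sets of the two pipelines are compatible and that the scheduler choice is merely an example. Copied import torch
+from diffusers import TextToVideoZeroPipeline, StableDiffusionPipeline, DPMSolverMultistepScheduler
+
+pipe = TextToVideoZeroPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+).to("cuda")
+
+# Swap the scheduler while keeping its configuration
+pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
+
+# Reuse the already-loaded components in a plain text-to-image pipeline (no extra download)
+txt2img = StableDiffusionPipeline(**pipe.components)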
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. diff --git a/scrapped_outputs/afdd10134bd966dc2b94df08bd445770.txt b/scrapped_outputs/afdd10134bd966dc2b94df08bd445770.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/afdd10134bd966dc2b94df08bd445770.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. 
When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. 
Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/b00eb0bfbae07742366c1c19c03ecedc.txt b/scrapped_outputs/b00eb0bfbae07742366c1c19c03ecedc.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/b00eb0bfbae07742366c1c19c03ecedc.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. 
In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. 
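For reference, the fine-tuned side of this comparison is straightforward to run: a minimal sketch of an InstructPix2Pix edit is shown below. The pipeline class (StableDiffusionInstructPix2PixPipeline) and the timbrooks/instruct-pix2pix checkpoint are assumptions drawn from the public Diffusers API rather than details given in this overview, and the input image is simply reused from another example in these docs.

import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Assumed checkpoint name; any InstructPix2Pix-compatible weights work here.
pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Reusing an image URL that appears elsewhere in these docs as a placeholder input.
image = load_image("https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png").convert("RGB")

# The edit instruction acts as the prompt; image_guidance_scale controls how closely
# the result sticks to the input image, guidance_scale how closely it follows the text.
edited = pipeline(
    "make it look like an oil painting",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
    guidance_scale=7.5,
).images[0]
edited.save("edited.png")

Because the editing behavior lives in the fine-tuned weights, no extra optimization loop runs at inference time, which is exactly the trade-off against Pix2Pix Zero described above.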
Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. 
It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/b00f63a27f387bfe1047ececd9464cdd.txt b/scrapped_outputs/b00f63a27f387bfe1047ececd9464cdd.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/b00f63a27f387bfe1047ececd9464cdd.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. 
InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. 
During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. 
Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/b01c15d731a0d151f7a85b62c1fa5f56.txt b/scrapped_outputs/b01c15d731a0d151f7a85b62c1fa5f56.txt new file mode 100644 index 0000000000000000000000000000000000000000..743357598369036ae890caa3bf05637fb12c3b84 --- /dev/null +++ b/scrapped_outputs/b01c15d731a0d151f7a85b62c1fa5f56.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
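As a quick orientation before the class reference that follows, the sketch below instantiates a UNet1DModel with its default configuration and runs a single forward pass. This is only a minimal, illustrative example of the API documented below (not a training or sampling recipe), and it assumes the default in_channels=2 and sample_size=65536 listed in the parameters.

import torch
from diffusers import UNet1DModel

# Default configuration: 2 input/output channels, Fourier time embedding,
# and the (DownBlock1DNoSkip, DownBlock1D, AttnDownBlock1D) downsampling path.
unet = UNet1DModel()

# One noisy 1D sample with the default channel count and length.
sample = torch.randn(1, 2, 65536)
timestep = 10  # an int, float, or tensor is accepted

with torch.no_grad():
    output = unet(sample, timestep).sample

print(output.shape)  # expected: torch.Size([1, 2, 65536]), same shape as the input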
UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. Returns +UNet1DOutput or tuple + +If return_dict is True, an UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. 
The output of UNet1DModel. diff --git a/scrapped_outputs/b05623d270f69691ed24f587216fd261.txt b/scrapped_outputs/b05623d270f69691ed24f587216fd261.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/b05623d270f69691ed24f587216fd261.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. 
In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. 
For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. 
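Both of these are covered in the subsections below. For prompt weighting in particular, the embeddings that get passed to prompt_embeds are typically produced with the Compel library; a minimal sketch of that step might look like the following (the Compel usage shown here is an assumption for illustration and is not part of this guide).

import torch
from compel import Compel
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Compel reuses the pipeline's tokenizer and text encoder to build weighted embeddings.
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# In Compel's syntax, "+" upweights a token and "-" downweights it (repeat for more effect).
prompt_embeds = compel("Astronaut in a jungle++, cold color palette, muted colors, detailed, 8k")

image = pipeline(prompt_embeds=prompt_embeds).images[0]

The resulting tensor is then passed to the pipeline exactly as shown in the prompt weighting subsection that follows.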
Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/b067d087d75da6190e0b8c47b0136c08.txt b/scrapped_outputs/b067d087d75da6190e0b8c47b0136c08.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/b067d087d75da6190e0b8c47b0136c08.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. 
It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images.
Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/b0868c18ff94f98dc676e7f487dd1564.txt b/scrapped_outputs/b0868c18ff94f98dc676e7f487dd1564.txt new file mode 100644 index 0000000000000000000000000000000000000000..e80c0c76f67b222f116cbc389bb925517c9da820 --- /dev/null +++ b/scrapped_outputs/b0868c18ff94f98dc676e7f487dd1564.txt @@ -0,0 +1,139 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: float = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads: int = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. 
only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. 
+num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. +time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). 
timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
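For orientation, here is a minimal usage sketch of the forward pass described above. This is not an official example: the tiny configuration values below are illustrative only, chosen so the model instantiates quickly on CPU.

import torch
from diffusers import UNet2DConditionModel

# build a deliberately small UNet (illustrative config, not the Stable Diffusion defaults)
unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    block_out_channels=(32, 64),
    layers_per_block=1,
    cross_attention_dim=32,
    norm_num_groups=8,       # must evenly divide every entry of block_out_channels
    attention_head_dim=8,
)

sample = torch.randn(1, 4, 32, 32)              # noisy latents: (batch, channel, height, width)
timestep = torch.tensor([10])                   # current diffusion timestep
encoder_hidden_states = torch.randn(1, 77, 32)  # text conditioning: (batch, seq_len, cross_attention_dim)

with torch.no_grad():
    out = unet(sample, timestep, encoder_hidden_states=encoder_hidden_states)

print(out.sample.shape)  # torch.Size([1, 4, 32, 32]) — same spatial shape as the input sample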
set_attention_slice < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. 
up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn. If None, the mid block layer is skipped. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/b0d65ac29140a04a026a6162b664da09.txt b/scrapped_outputs/b0d65ac29140a04a026a6162b664da09.txt new file mode 100644 index 0000000000000000000000000000000000000000..a8f413795df9ab5a09a9d2bf61a84f345bf9ed33 --- /dev/null +++ b/scrapped_outputs/b0d65ac29140a04a026a6162b664da09.txt @@ -0,0 +1,33 @@ +Variance preserving stochastic differential equation (VP-SDE) scheduler + + +Overview + +Original paper can be found here. +Score SDE-VP is under construction. + +ScoreSdeVpScheduler + + +class diffusers.schedulers.ScoreSdeVpScheduler + +< +source +> +( +num_train_timesteps = 2000 +beta_min = 0.1 +beta_max = 20 +sampling_eps = 0.001 + +) + + + +The variance preserving stochastic differential equation (SDE) scheduler. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. 
They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more information, see the original paper: https://arxiv.org/abs/2011.13456 +UNDER CONSTRUCTION diff --git a/scrapped_outputs/b0eaf17f8e2dd919491aae1aaeca5005.txt b/scrapped_outputs/b0eaf17f8e2dd919491aae1aaeca5005.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/b0eaf17f8e2dd919491aae1aaeca5005.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. 
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/b0fc92f51230209ec447648e0f699449.txt b/scrapped_outputs/b0fc92f51230209ec447648e0f699449.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/b0fc92f51230209ec447648e0f699449.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). 
Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. 
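As a practical aid for the human-evaluation step above, the outputs of two checkpoints can be tiled into a single comparison image. The sketch below is illustrative: it re-uses sample_prompts, seed, and the v1-4 images generated in the previous snippets, and it assumes a diffusers version that provides the make_image_grid utility.

import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import make_image_grid

# generate the same prompts with a second checkpoint, re-using the seed for a fair comparison
sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

generator = torch.manual_seed(seed)
images_1_5 = sd_pipeline_1_5(sample_prompts, num_images_per_prompt=1, generator=generator).images

# one row per checkpoint, one column per prompt, ready to hand to human raters
grid = make_image_grid(images + images_1_5, rows=2, cols=len(sample_prompts))
grid.save("parti_prompts_v1-4_vs_v1-5.png")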
Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. 
The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
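In symbols, with the CLIP image encoder E_img and the CLIP text encoder E_txt, the metric is the cosine similarity between the edit direction in image space and the edit direction in text space:

sim_direction = cosine_similarity( E_img(image_2) - E_img(image_1), E_txt(caption_2) - E_txt(caption_1) )

This is exactly the quantity computed by the DirectionalSimilarity module implemented below.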
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
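Before building the two mini datasets, it may help to see the quantity being computed. FID is the Fréchet distance between two Gaussians fitted to Inception features of the real and generated images. A minimal NumPy/SciPy sketch (assuming real_feats and fake_feats are already-extracted Inception feature matrices of shape (num_images, feature_dim)) looks like this:

import numpy as np
from scipy import linalg

def frechet_distance(real_feats, fake_feats):
    # fit a Gaussian (mean vector, covariance matrix) to each set of features
    mu_r, sigma_r = real_feats.mean(axis=0), np.cov(real_feats, rowvar=False)
    mu_g, sigma_g = fake_feats.mean(axis=0), np.cov(fake_feats, rowvar=False)

    # matrix square root of the product of the two covariances
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts caused by numerical noise

    # FID = ||mu_r - mu_g||^2 + Tr(Sigma_r + Sigma_g - 2 * (Sigma_r @ Sigma_g)^(1/2))
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(sigma_r + sigma_g - 2 * covmean))

In practice we will not implement this ourselves; torchmetrics takes care of both the feature extraction and this computation, as shown at the end of this section.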
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
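To make the seed-averaging practice above concrete, here is an illustrative sketch (re-using real_images, dit_pipeline, and class_ids from this section; the exact numbers will vary from run to run):

import numpy as np
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid_scores = []
for seed in (0, 1, 2):
    generator = torch.manual_seed(seed)
    output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")
    fake_batch = torch.from_numpy(output.images).permute(0, 3, 1, 2)

    fid = FrechetInceptionDistance(normalize=True)
    fid.update(real_images, real=True)
    fid.update(fake_batch, real=False)
    fid_scores.append(float(fid.compute()))

print(f"FID over seeds: mean={np.mean(fid_scores):.2f}, std={np.std(fid_scores):.2f}")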
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/b106c887bb3ebea0c0f4f4923313f4a4.txt b/scrapped_outputs/b106c887bb3ebea0c0f4f4923313f4a4.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/b106c887bb3ebea0c0f4f4923313f4a4.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. 
In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. 
For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. 
Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/b109540ec0a75e798f36d36eaa20a8ce.txt b/scrapped_outputs/b109540ec0a75e798f36d36eaa20a8ce.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/b109540ec0a75e798f36d36eaa20a8ce.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. 
This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. 
Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/b12d1d0a06f3672e9fcc5192a862b597.txt b/scrapped_outputs/b12d1d0a06f3672e9fcc5192a862b597.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/b12d1d0a06f3672e9fcc5192a862b597.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe(eta=0.0, num_inference_steps=50) + +>>> # process image to PIL +>>> image_processed = image.cpu().permute(0, 2, 3, 1) +>>> image_processed = (image_processed + 1.0) * 127.5 +>>> image_processed = image_processed.numpy().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/b13e88c2d358cbaa8d268160057d81e9.txt b/scrapped_outputs/b13e88c2d358cbaa8d268160057d81e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/b13e88c2d358cbaa8d268160057d81e9.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because its only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script that are relevant to the T2I-Adapter. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images.
Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/b147ba5709a6ad73de02cd730eee0eff.txt b/scrapped_outputs/b147ba5709a6ad73de02cd730eee0eff.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/b147ba5709a6ad73de02cd730eee0eff.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble their training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.
Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. 
You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/b150a7abdedc97e33a20eaced9a4e80e.txt b/scrapped_outputs/b150a7abdedc97e33a20eaced9a4e80e.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/b150a7abdedc97e33a20eaced9a4e80e.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. 
The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir <path_to_your_folder> \ + <other-arguments> Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have run the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + <other-arguments> Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation!
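Before launching a full training run, it can help to load the uploaded dataset back and confirm the image column survived the round trip. This is only a minimal sketch, assuming you pushed the dataset to your own namespace as in the example above; the repository id below is a placeholder. Copied
from datasets import load_dataset

# placeholder repository id (replace with the dataset you pushed above)
dataset = load_dataset("your-username/name_of_your_dataset", split="train")

# the ImageFolder builder stores each example as a PIL image under the "image" column
print(dataset)
print(dataset[0]["image"].size)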
diff --git a/scrapped_outputs/b165806e8584b5b7642613d18c12024f.txt b/scrapped_outputs/b165806e8584b5b7642613d18c12024f.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/b165806e8584b5b7642613d18c12024f.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/b186da5ac3a14d0dbdfe38ba2efc2c5a.txt b/scrapped_outputs/b186da5ac3a14d0dbdfe38ba2efc2c5a.txt new file mode 100644 index 0000000000000000000000000000000000000000..d652e1d857c98c3e8bba256ca96f37cda949853a --- /dev/null +++ b/scrapped_outputs/b186da5ac3a14d0dbdfe38ba2efc2c5a.txt @@ -0,0 +1,57 @@ +Schedulers 🤗 Diffusers provides many scheduler functions for the diffusion process. A scheduler takes a model’s output (the sample which the diffusion process is iterating on) and a timestep to return a denoised sample. 
The timestep is important because it dictates where in the diffusion process the step is; data is generated by iterating forward n timesteps and inference occurs by propagating backward through the timesteps. Based on the timestep, a scheduler may be discrete in which case the timestep is an int or continuous in which case the timestep is a float. Depending on the context, a scheduler defines how to iteratively add noise to an image or how to update a sample based on a model’s output: during training, a scheduler adds noise (there are different algorithms for how to add noise) to a sample to train a diffusion model during inference, a scheduler defines how to update a sample based on a pretrained model’s output Many schedulers are implemented from the k-diffusion library by Katherine Crowson, and they’re also widely used in A1111. To help you map the schedulers from k-diffusion and A1111 to the schedulers in 🤗 Diffusers, take a look at the table below: A1111/k-diffusion 🤗 Diffusers Usage DPM++ 2M DPMSolverMultistepScheduler DPM++ 2M Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True DPM++ 2M SDE DPMSolverMultistepScheduler init with algorithm_type="sde-dpmsolver++" DPM++ 2M SDE Karras DPMSolverMultistepScheduler init with use_karras_sigmas=True and algorithm_type="sde-dpmsolver++" DPM++ 2S a N/A very similar to DPMSolverSinglestepScheduler DPM++ 2S a Karras N/A very similar to DPMSolverSinglestepScheduler(use_karras_sigmas=True, ...) DPM++ SDE DPMSolverSinglestepScheduler DPM++ SDE Karras DPMSolverSinglestepScheduler init with use_karras_sigmas=True DPM2 KDPM2DiscreteScheduler DPM2 Karras KDPM2DiscreteScheduler init with use_karras_sigmas=True DPM2 a KDPM2AncestralDiscreteScheduler DPM2 a Karras KDPM2AncestralDiscreteScheduler init with use_karras_sigmas=True DPM adaptive N/A DPM fast N/A Euler EulerDiscreteScheduler Euler a EulerAncestralDiscreteScheduler Heun HeunDiscreteScheduler LMS LMSDiscreteScheduler LMS Karras LMSDiscreteScheduler init with use_karras_sigmas=True N/A DEISMultistepScheduler N/A UniPCMultistepScheduler All schedulers are built from the base SchedulerMixin class which implements low level utilities shared by all schedulers. SchedulerMixin class diffusers.SchedulerMixin < source > ( ) Base class for all schedulers. SchedulerMixin contains common functions shared by all schedulers such as general loading and saving +functionalities. ConfigMixin takes care of storing the configuration attributes (like num_train_timesteps) that are passed to +the scheduler’s __init__ function, and the attributes can be accessed by scheduler.config.num_train_timesteps. Class attributes: _compatibles (List[str]) — A list of scheduler classes that are compatible with the parent scheduler +class. Use from_config() to load a different compatible scheduler class (should be overridden +by parent class). from_pretrained < source > ( pretrained_model_name_or_path: Union = None subfolder: Optional = None return_unused_kwargs = False **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the scheduler +configuration saved with save_pretrained(). + subfolder (str, optional) — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. Instantiate a scheduler from a pre-defined JSON configuration file in a local directory or Hub repository. To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. save_pretrained < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a scheduler configuration object to a directory so that it can be reloaded using the +from_pretrained() class method. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. KarrasDiffusionSchedulers KarrasDiffusionSchedulers are a broad generalization of schedulers in 🤗 Diffusers. The schedulers in this class are distinguished at a high level by their noise sampling strategy, the type of network and scaling, the training strategy, and how the loss is weighed. The different schedulers in this class, depending on the ordinary differential equations (ODE) solver type, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in 🤗 Diffusers. 
The schedulers in this class are given here. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/b19a5090c5adee4ce84ff9683b1f95ad.txt b/scrapped_outputs/b19a5090c5adee4ce84ff9683b1f95ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..69a22d0ec7c29f9c77e70e4f6ce38e418f6240d3 --- /dev/null +++ b/scrapped_outputs/b19a5090c5adee4ce84ff9683b1f95ad.txt @@ -0,0 +1,301 @@ +Self-Attention Guidance (SAG) + + +Overview + +Self-Attention Guidance by Susung Hong et al. +The abstract of the paper is the following: +Denoising diffusion models (DDMs) have been drawing much attention for their appreciable sample quality and diversity. Despite their remarkable performance, DDMs remain black boxes on which further study is necessary to take a profound step. Motivated by this, we delve into the design of conventional U-shaped diffusion models. More specifically, we investigate the self-attention modules within these models through carefully designed experiments and explore their characteristics. In addition, inspired by the studies that substantiate the effectiveness of the guidance schemes, we present plug-and-play diffusion guidance, namely Self-Attention Guidance (SAG), that can drastically boost the performance of existing diffusion models. Our method, SAG, extracts the intermediate attention map from a diffusion model at every iteration and selects tokens above a certain attention score for masking and blurring to obtain a partially blurred input. Subsequently, we measure the dissimilarity between the predicted noises obtained from feeding the blurred and original input to the diffusion model and leverage it as guidance. 
With this guidance, we observe apparent improvements in a wide range of diffusion models, e.g., ADM, IDDPM, and Stable Diffusion, and show that the results further improve by combining our method with the conventional guidance scheme. We provide extensive ablation studies to verify our choices. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionSAGPipeline +Text-to-Image Generation +Colab + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionSAGPipeline +from accelerate.utils import set_seed + +pipe = StableDiffusionSAGPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +seed = 8978 +prompt = "." +guidance_scale = 7.5 +num_images_per_prompt = 1 + +sag_scale = 1.0 + +set_seed(seed) +images = pipe( + prompt, num_images_per_prompt=num_images_per_prompt, guidance_scale=guidance_scale, sag_scale=sag_scale +).images +images[0].save("example.png") + +StableDiffusionSAGPipeline + + +class diffusers.StableDiffusionSAGPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +sag_scale: float = 0.75 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +sag_scale (float, optional, defaults to 0.75) — +SAG scale as defined in [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance] +(https://arxiv.org/abs/2210.00939). sag_scale is defined as s_s of equation (24) of SAG paper: +https://arxiv.org/pdf/2210.00939.pdf. Typically chosen between [0, 1.0] for better quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/b1ac7046abc4986845182231dae5274f.txt b/scrapped_outputs/b1ac7046abc4986845182231dae5274f.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/b1ac7046abc4986845182231dae5274f.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. 
The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/b20afe4d09d35d0158d1064d140c082b.txt b/scrapped_outputs/b20afe4d09d35d0158d1064d140c082b.txt new file mode 100644 index 0000000000000000000000000000000000000000..321353dbaa0806b386dcef5fad6fbc29b72b6c56 --- /dev/null +++ b/scrapped_outputs/b20afe4d09d35d0158d1064d140c082b.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. 
ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. 
+If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/b20ee034a6cc9ebc565c6bf08c21d6d7.txt b/scrapped_outputs/b20ee034a6cc9ebc565c6bf08c21d6d7.txt new file mode 100644 index 0000000000000000000000000000000000000000..bdb12cb9f8ec935ec9417d06fc21a1176b44b6b4 --- /dev/null +++ b/scrapped_outputs/b20ee034a6cc9ebc565c6bf08c21d6d7.txt @@ -0,0 +1,248 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. 
For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. 
LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. 
Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. 
For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MB. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, and AnimateDiff. You can also use any custom models finetuned from the same base models. It also works with LCM-Lora out of the box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion Pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you can explicitly load a CLIPVisionModelWithProjection model and pass it to a Stable Diffusion pipeline when you create it. + + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add “sunglasses”.
😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality, wearing sunglasses', +    ip_adapter_image=image, +    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", +    num_inference_steps=50, +    generator=generator, +).images +images[0]     You can use the set_ip_adapter_scale() method to adjust the text prompt and image prompt condition ratio.  If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. +scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See below examples of how you can use it with Image-to-Image and Inpaint. image-to-image inpaint Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality', +    image = image, +    ip_adapter_image=ip_image, +    num_inference_steps=50, +    generator=generator, +    strength=0.6, +).images +images[0] IP-Adapters can also be used with SDXL Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face model. 
Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image You can load multiple IP-Adapter models and use multiple reference images at the same time. In this example we use IP-Adapter-Plus face model to create a consistent character and also use IP-Adapter-Plus model along with 10 images to create a coherent style in the image we generate. Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() + +face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] + +generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0]     style input image face input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. 
Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses a Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is run the load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image. 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines; feel free to open a feature request if you have a cool use-case and require integrating IP-Adapters with a pipeline that does not support it yet! You can find examples below of how to use IP-Adapter with ControlNet and AnimateDiff. ControlNet AnimateDiff Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image diff --git a/scrapped_outputs/b233876dde0cd04b23890508f571ce58.txt b/scrapped_outputs/b233876dde0cd04b23890508f571ce58.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/b233876dde0cd04b23890508f571ce58.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance, Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content.
The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. 
+Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array representing the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. unsafe_images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker and may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled. Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function.
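In practice, the SafetyConfig presets shown above are simply unpacked into the sld_* call arguments documented in __call__, so you can also pass those arguments yourself for finer control over the safety guidance. A minimal sketch follows; the specific values below are illustrative, not an official preset. Copied
import torch
from diffusers import StableDiffusionPipelineSafe

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"

# Pass the SLD arguments explicitly instead of unpacking a **SafetyConfig preset.
image = pipeline(
    prompt=prompt,
    sld_guidance_scale=2000,  # values below 1 disable safety guidance entirely
    sld_warmup_steps=7,       # SLD is only applied after this many diffusion steps
    sld_threshold=0.025,
    sld_momentum_scale=0.5,
    sld_mom_beta=0.7,
).images[0]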
diff --git a/scrapped_outputs/b24c616ecb4fd76745c08bbfba05fdd2.txt b/scrapped_outputs/b24c616ecb4fd76745c08bbfba05fdd2.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/b24c616ecb4fd76745c08bbfba05fdd2.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 
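One detail worth spelling out: a Generator is consumed as random numbers are drawn from it, so to reproduce an earlier result you re-create (or re-seed) it with the same seed before the next call. A small sketch of the idea, reusing the ddim pipeline and the numpy import from the snippet above: Copied
# re-create the generator with the same seed to reproduce the earlier result
generator = torch.Generator(device="cpu").manual_seed(0)
first = ddim(num_inference_steps=2, output_type="np", generator=generator).images

generator = torch.Generator(device="cpu").manual_seed(0)
second = ddim(num_inference_steps=2, output_type="np", generator=generator).images

# identical on CPU, so the difference is exactly zero
print(np.abs(first - second).max())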
💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often negligible, and you’ll generate much more similar +values than if the Generator had been placed on the GPU. Finally, more complex pipelines such as UnCLIPPipeline are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. In this case, you’ll need to run +on exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones, and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms.
Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/b250aef70a7608b66fd1eeb963ec7acc.txt b/scrapped_outputs/b250aef70a7608b66fd1eeb963ec7acc.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/b250aef70a7608b66fd1eeb963ec7acc.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. 
❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce an (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode().
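As with joint generation, you can pin this behavior instead of relying on mode inference. A short sketch that sets the mode by hand and then clears it again, reusing the pipe object from the snippet above: Copied
# Equivalent to the text-to-image example above, with the mode pinned explicitly.
pipe.set_text_to_image_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]

# Remove the manually set mode so later calls infer the task from their inputs again.
pipe.reset_mode()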
Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
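If you want these round-trip results to be repeatable, the pipeline call also accepts a generator argument (see the __call__ reference below). A sketch of seeding both stages of the image variation example, reusing the pipe and init_image objects defined above: Copied
import torch

# seed the image-to-text stage
generator = torch.Generator(device="cpu").manual_seed(0)
sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0, generator=generator)
i2t_text = sample.text[0]

# re-seed for the text-to-image stage so the whole round trip can be repeated later
generator = torch.Generator(device="cpu").manual_seed(0)
sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0, generator=generator)
final_image = sample.images[0]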
UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with UNNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). 
Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This assumes +a full set of VAE, CLIP, and text latents, if supplied, overrides the value of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are be generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/b26c1df096758866d30c2e027ce61725.txt b/scrapped_outputs/b26c1df096758866d30c2e027ce61725.txt new file mode 100644 index 0000000000000000000000000000000000000000..163deebba32d44239adf15467f9dcbdfbfad7c90 --- /dev/null +++ b/scrapped_outputs/b26c1df096758866d30c2e027ce61725.txt @@ -0,0 +1,635 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. 
Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
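The memory and FreeU switches documented above can be combined on the same pipeline instance. The snippet below is a minimal sketch that reuses pipe, prompt, controlnet_conditioning_scale, and canny_image from the example above; the enable_freeu values shown are only the ones commonly suggested for SDXL, so verify them against the official FreeU repository before relying on them.
>>> # reduce VAE decode memory; useful for large batch sizes or high resolutions
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()

>>> # optionally enable FreeU (values commonly suggested for SDXL -- verify against the FreeU repository)
>>> pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)

>>> image = pipe(
...     prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image
... ).images[0]

>>> # switch the extras off again when they are no longer needed
>>> pipe.disable_freeu()
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()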
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
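Because encode_prompt is exposed as a public method, the text embeddings can be pre-computed once and reused across several calls, for example when sweeping over controlnet_conditioning_scale. The sketch below again reuses pipe, prompt, negative_prompt, and canny_image from the example above and assumes the four-tensor return order used by the SDXL pipelines (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds); check this, and the interaction with CPU offloading, against your installed diffusers version.
>>> # pre-compute prompt embeddings once (return order assumed; verify against your diffusers version)
>>> (
...     prompt_embeds,
...     negative_prompt_embeds,
...     pooled_prompt_embeds,
...     negative_pooled_prompt_embeds,
... ) = pipe.encode_prompt(
...     prompt=prompt,
...     negative_prompt=negative_prompt,
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )

>>> # reuse the cached embeddings; do not pass `prompt` at the same time
>>> images = []
>>> for scale in [0.3, 0.5, 0.7]:
...     images.append(
...         pipe(
...             prompt_embeds=prompt_embeds,
...             negative_prompt_embeds=negative_prompt_embeds,
...             pooled_prompt_embeds=pooled_prompt_embeds,
...             negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
...             image=canny_image,
...             controlnet_conditioning_scale=scale,
...         ).images[0]
...     )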
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. 
Anything below 512 pixels won’t work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. strength (float, optional, defaults to 0.8) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. guidance_scale (float, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2. of Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). negative_prompt_2 (str or List[str], optional) — The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt input argument. ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder will try best to recognize the content of the input image even if +you remove all prompts. The guidance_scale between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. 
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... 
image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ration of the image and +contains all masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. 
This is useful when the masked area is small while the image is large +and contain information inreleant for inpainging, such as background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate opencv-python +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> from PIL import Image +>>> import cv2 +>>> import numpy as np +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/b27251692d7265c291455730589ab3fa.txt b/scrapped_outputs/b27251692d7265c291455730589ab3fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/b27251692d7265c291455730589ab3fa.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. 
If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/b27d9a1948f6067b00319e6e82487395.txt b/scrapped_outputs/b27d9a1948f6067b00319e6e82487395.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/b27d9a1948f6067b00319e6e82487395.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. 
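Given the note above about PyTorch 1.x, a quick version check before generating can save a confusing debugging session (a minimal sketch; nothing here is specific to UniDiffuser):
Copied
import torch

# UniDiffuser outputs have been reported as all-black or NaN on PyTorch 1.x; prefer 2.x.
print(torch.__version__)
assert int(torch.__version__.split(".")[0]) >= 2, "Consider upgrading to PyTorch 2.x for UniDiffuser"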
❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce a (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to manually specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). 
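For instance, continuing from the pipe object created above (a minimal sketch; the prompt is only illustrative), pinning the mode looks like this:
Copied
# Force text-to-image mode instead of letting the pipeline infer it from the inputs.
pipe.set_text_to_image_mode()
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0)
t2i_image = sample.images[0]

# Restore automatic mode inference afterwards.
pipe.reset_mode()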
Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
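As with the examples above, any of these generation modes can be made reproducible by passing a seeded torch.Generator through the generator argument documented below (a minimal sketch, reusing the pipeline setup from the earlier examples):
Copied
import torch
from diffusers import UniDiffuserPipeline

pipe = UniDiffuserPipeline.from_pretrained("thu-ml/unidiffuser-v1", torch_dtype=torch.float16)
pipe.to("cuda")

# The same seed should reproduce the same (image, text) sample for the same inputs.
generator = torch.Generator(device="cuda").manual_seed(0)
sample = pipe(prompt="an elephant under the sea", num_inference_steps=20, guidance_scale=8.0, generator=generator)
image = sample.images[0]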
UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with UNNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). 
Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This assumes +a full set of VAE, CLIP, and text latents, if supplied, overrides the value of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are be generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/b2abd283ed6d92aee7012bc1da8dda9a.txt b/scrapped_outputs/b2abd283ed6d92aee7012bc1da8dda9a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb2a7efc0d3fd312b0eaa154732c00aa8b34bd28 --- /dev/null +++ b/scrapped_outputs/b2abd283ed6d92aee7012bc1da8dda9a.txt @@ -0,0 +1,123 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading a pretrained AutoencoderKL from pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... 
) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into a AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a AutoencoderKL from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. 
The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/lllyasviel/ControlNet/main/models/cldm_v15.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. 
Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/b2c3ef92ef9f4b6e3e0b0dfee7ea6c21.txt b/scrapped_outputs/b2c3ef92ef9f4b6e3e0b0dfee7ea6c21.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/b2c3ef92ef9f4b6e3e0b0dfee7ea6c21.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
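The helper methods documented above can be combined on a single pipeline object. The snippet below is a minimal sketch rather than an official example: it assumes the SimianLuo/LCM_Dreamshaper_v7 checkpoint shown earlier, and the FreeU scaling factors are the values suggested for Stable Diffusion v1.x in the FreeU repository, which may need retuning for LCM checkpoints.

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
pipe.to("cuda")

# Trade a little speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# FreeU scaling factors; values borrowed from the SD v1.x suggestions and may need tuning for LCM.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

image = pipe(
    "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]

# Switch the mechanisms back off once they are no longer needed.
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()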
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps use to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps from as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
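The callback_on_step_end arguments described above can be used to inspect or modify tensors while the few LCM denoising steps run. Below is a minimal, hedged sketch: the checkpoint and input image mirror the example above, the log_step function name and the printed statistic are purely illustrative, and only latents is available in the callback because it is the tensor requested via callback_on_step_end_tensor_inputs.

import torch
import PIL.Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16)
pipe.to("cuda")

def log_step(pipeline, step, timestep, callback_kwargs):
    # "latents" is present because it was requested via callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(f"step {step}, timestep {timestep}, latent std {latents.std().item():.4f}")
    # The callback must return the (possibly modified) tensors it received.
    return callback_kwargs

init_image = PIL.Image.open("./snowy_mountains.png")  # placeholder local image
image = pipe(
    prompt="High altitude snowy mountains",
    image=init_image,
    strength=0.8,
    num_inference_steps=4,
    guidance_scale=8.0,
    callback_on_step_end=log_step,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
image.save("image.png")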
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/b2ef01a9663ad03e3317119f47901e76.txt b/scrapped_outputs/b2ef01a9663ad03e3317119f47901e76.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/b2ef01a9663ad03e3317119f47901e76.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. 
The DiffEdit algorithm works in three steps: 1) the diffusion model denoises an image conditioned on some query text and reference text, which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text; 2) the input image is encoded into latent space with DDIM; 3) the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image. This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline.
The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/b2efb6611ada1701c12205cf905b3d37.txt b/scrapped_outputs/b2efb6611ada1701c12205cf905b3d37.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/b2efb6611ada1701c12205cf905b3d37.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. 
GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. 
guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. 
If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
processor (CLIPProcessor) — +A CLIPProcessor to procces reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project image embedding into phrases embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. 
+There should only be one image per bounding box input_phrases_mask (int or List[int]) — +pre phrases mask input defined by the correspongding input_phrases_mask input_images_mask (int or List[int]) — +pre images mask input defined by the correspongding input_images_mask gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... 
], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/b2f1a0971d99ba99c370c5ae08896695.txt b/scrapped_outputs/b2f1a0971d99ba99c370c5ae08896695.txt new file mode 100644 index 0000000000000000000000000000000000000000..75fd75154200d2b6d6fa296048d043f23b017c54 --- /dev/null +++ b/scrapped_outputs/b2f1a0971d99ba99c370c5ae08896695.txt @@ -0,0 +1,84 @@ +Performing inference with LCM Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. This guide shows how to perform inference with LCMs for text-to-image and image-to-image generation tasks. It will also cover performing inference with LoRA checkpoints. 
Text-to-image You’ll use the StableDiffusionXLPipeline here, replacing its unet with a distilled one. The UNet was distilled from the SDXL UNet using the framework introduced in LCM. Another important component is the scheduler: LCMScheduler. Together with the distilled UNet and the scheduler, LCM enables a fast inference workflow that overcomes the slow iterative nature of diffusion models. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16 +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation, which is far fewer than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range, so that is the ideal range for guidance_scale. However, disabling guidance_scale by using a value of 1.0 is also effective in most cases. Image-to-image The findings above apply to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs: Copied from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import load_image +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16 +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "High altitude snowy mountains" +image = load_image( + "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/snowy_mountains.jpeg" +) + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=image, + num_inference_steps=4, + generator=generator, + guidance_scale=8.0, +).images[0] LoRA The LCM framework can be generalized to work with LoRA. This effectively eliminates the need for expensive full fine-tuning runs, since LoRA training updates only a small number of parameters. During inference, the LCMScheduler is the key advantage because it enables few-step inference without compromising quality. We recommend disabling guidance_scale by setting it to 0. The model is trained to follow prompts accurately +even without a guidance scale. You can, however, still use a guidance scale, in which case we recommend +using values between 1.0 and 2.0.
Text-to-image Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +lcm_lora_id = "latent-consistency/lcm-lora-sdxl" + +pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16", torch_dtype=torch.float16).to("cuda") + +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux" +image = pipe( + prompt=prompt, + num_inference_steps=4, + guidance_scale=0, # set guidance scale to 0 to disable it +).images[0] Image-to-image Extending LCM LoRA to image-to-image is possible: Copied from diffusers import StableDiffusionXLImg2ImgPipeline, LCMScheduler +from diffusers.utils import load_image +import torch + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +lcm_lora_id = "latent-consistency/lcm-lora-sdxl" + +pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, variant="fp16", torch_dtype=torch.float16).to("cuda") + +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux" + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lora_lcm.png") + +image = pipe( + prompt=prompt, + image=image, + num_inference_steps=4, + guidance_scale=0, # set guidance scale to 0 to disable it +).images[0] diff --git a/scrapped_outputs/b32bb55bab9d91e8273b3806015bca6b.txt b/scrapped_outputs/b32bb55bab9d91e8273b3806015bca6b.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/b32bb55bab9d91e8273b3806015bca6b.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. 
The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. 
Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. 
Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. 
But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed or offload the entire model to the GPU to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention that is automatically enabled if you’re using PyTorch 2.0. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/b34c96d61495800cb24da5067f16de47.txt b/scrapped_outputs/b34c96d61495800cb24da5067f16de47.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b354e6ba52904872a4d9b30ee8a292ea.txt b/scrapped_outputs/b354e6ba52904872a4d9b30ee8a292ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..2d4b6abf16763c9b581e6b0009844fdbc7bdf964 --- /dev/null +++ b/scrapped_outputs/b354e6ba52904872a4d9b30ee8a292ea.txt @@ -0,0 +1,313 @@ +Super-Resolution + + +StableDiffusionUpscalePipeline + +The upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. StableDiffusionUpscalePipeline can be used to enhance the resolution of input images by a factor of 4. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stabilityai/stable-diffusion-x4-upscaler (x4 resolution resolution): stable-diffusion-x4-upscaler + +class diffusers.StableDiffusionUpscalePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +low_res_scheduler: DDPMScheduler +scheduler: KarrasDiffusionSchedulers +max_noise_level: int = 350 + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low res conditioning image. It must be an instance of +DDPMScheduler. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image super-resolution using Stable Diffusion 2. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
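As a quick orientation before the parameter reference below, here is a minimal sketch of how this pipeline might be instantiated. It assumes a CUDA device, uses the stabilityai/stable-diffusion-x4-upscaler checkpoint listed above, and calls the optional enable_attention_slicing() memory helper documented further down:
import torch
from diffusers import StableDiffusionUpscalePipeline

# Load the x4 upscaler checkpoint in half precision to reduce GPU memory usage
pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
pipeline = pipeline.to("cuda")

# Optionally trade a little speed for additional memory savings
pipeline.enable_attention_slicing()
The __call__ method documented below then takes a text prompt and a low-resolution image and returns the upscaled result.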
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[PIL.Image.Image]] = None +num_inference_steps: int = 75 +guidance_scale: float = 9.0 +noise_level: int = 20 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image or ListPIL.Image.Image or torch.FloatTensor) — +Image, or tensor representing an image batch which will be upscaled. * + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
+ + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/b359349d8bddc0f9f3aac1cd427bbbf9.txt b/scrapped_outputs/b359349d8bddc0f9f3aac1cd427bbbf9.txt new file mode 100644 index 0000000000000000000000000000000000000000..173b882d6bb0b0500124b1e8f97633b6bc0e5c16 --- /dev/null +++ b/scrapped_outputs/b359349d8bddc0f9f3aac1cd427bbbf9.txt @@ -0,0 +1,62 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values that work pretty well are provided for most parameters, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA-relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA-relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID.
The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA! To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/b3af2ed269713c749f0308efe8e8fbe8.txt b/scrapped_outputs/b3af2ed269713c749f0308efe8e8fbe8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d78ead537f42d571c3c10da3a7e42623bf2ab7fa --- /dev/null +++ b/scrapped_outputs/b3af2ed269713c749f0308efe8e8fbe8.txt @@ -0,0 +1,150 @@ +Diffusers + +🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. +The library has three main components: +State-of-the-art diffusion pipelines for inference with just a few lines of code. +Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. +Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. +Tutorials +Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! +How-to guides +Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. 
+Conceptual guides +Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. +Reference +Technical descriptions of how 🤗 Diffusers classes and methods work. + +Supported pipelines + +Pipeline +Paper/Repository +Tasks +alt_diffusion +AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities +Image-to-Image Text-Guided Generation +audio_diffusion +Audio Diffusion +Unconditional Audio Generation +controlnet +Adding Conditional Control to Text-to-Image Diffusion Models +Image-to-Image Text-Guided Generation +cycle_diffusion +Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance +Image-to-Image Text-Guided Generation +dance_diffusion +Dance Diffusion +Unconditional Audio Generation +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation +if +IF +Image Generation +if_img2img +IF +Image-to-Image Generation +if_inpainting +IF +Image-to-Image Generation +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation +semantic_stable_diffusion +Semantic Guidance +Text-Guided Generation +stable_diffusion_text2img +Stable Diffusion +Text-to-Image Generation +stable_diffusion_img2img +Stable Diffusion +Image-to-Image Text-Guided Generation +stable_diffusion_inpaint +Stable Diffusion +Text-Guided Image Inpainting +stable_diffusion_panorama +MultiDiffusion +Text-to-Panorama Generation +stable_diffusion_pix2pix +InstructPix2Pix: Learning to Follow Image Editing Instructions +Text-Guided Image Editing +stable_diffusion_pix2pix_zero +Zero-shot Image-to-Image Translation +Text-Guided Image Editing +stable_diffusion_attend_and_excite +Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models +Text-to-Image Generation +stable_diffusion_self_attention_guidance +Improving Sample Quality of Diffusion Models Using Self-Attention Guidance +Text-to-Image Generation Unconditional Image Generation +stable_diffusion_image_variation +Stable Diffusion Image Variations +Image-to-Image Generation +stable_diffusion_latent_upscale +Stable Diffusion Latent Upscaler +Text-Guided Super Resolution Image-to-Image +stable_diffusion_model_editing +Editing Implicit Assumptions in Text-to-Image Diffusion Models +Text-to-Image Model Editing +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting +stable_diffusion_2 +Depth-Conditional Stable Diffusion +Depth-to-Image Generation +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation +stable_unclip +Stable unCLIP 
+Text-to-Image Generation +stable_unclip +Stable unCLIP +Image-to-Image Text-Guided Generation +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation +text_to_video_sd +Modelscope’s Text-to-video-synthesis Model in Open Domain +Text-to-Video Generation +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents(implementation by kakaobrain) +Text-to-Image Generation +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Text-to-Image Generation +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image Generation diff --git a/scrapped_outputs/b3e13d9f5164b59ac50fc9ab4803526e.txt b/scrapped_outputs/b3e13d9f5164b59ac50fc9ab4803526e.txt new file mode 100644 index 0000000000000000000000000000000000000000..c796491cbfe9ea7c96684c36934fc2d682903305 --- /dev/null +++ b/scrapped_outputs/b3e13d9f5164b59ac50fc9ab4803526e.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. 
To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/b3f64dd1491685b4457c37ba5cef058a.txt b/scrapped_outputs/b3f64dd1491685b4457c37ba5cef058a.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/b3f64dd1491685b4457c37ba5cef058a.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. 
Under the hood, the PushToHubMixin: creates a repository on the Hub, saves your model, scheduler, or pipeline files so they can be reloaded later, and uploads the folder containing these files to the Hub. This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namespace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all its components to the Hub.
For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/b4000019340215fedbb14c3b1292e855.txt b/scrapped_outputs/b4000019340215fedbb14c3b1292e855.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/b4000019340215fedbb14c3b1292e855.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. 
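If you have already exported and saved an OpenVINO model locally (for example with save_pretrained, as shown below), you can reload the saved directory directly without export=True. This is a short sketch assuming a local folder named openvino-sd-v1-5: Copied from optimum.intel import OVStableDiffusionPipeline + +# loads the exported OpenVINO model from disk, no conversion needed +pipeline = OVStableDiffusionPipeline.from_pretrained("openvino-sd-v1-5") +image = pipeline("sailing ship in storm by Rembrandt").images[0]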
If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/b40b7fc31ec693a08ab029dd8f6e6385.txt b/scrapped_outputs/b40b7fc31ec693a08ab029dd8f6e6385.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/b40b7fc31ec693a08ab029dd8f6e6385.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
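As a usage sketch (not part of the API reference above), you would typically swap this scheduler into an existing pipeline with from_config; note that Heun is a second-order method, so each step runs the UNet twice: Copied import torch +from diffusers import DiffusionPipeline, HeunDiscreteScheduler + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +# reuse the existing scheduler config so the noise schedule stays compatible +pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config) + +image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]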
diff --git a/scrapped_outputs/b4259f9677a79f0a8a30dd2344d723a3.txt b/scrapped_outputs/b4259f9677a79f0a8a30dd2344d723a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/b4259f9677a79f0a8a30dd2344d723a3.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also used tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. 
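Both memory-saving VAE modes can be switched off again once you no longer need them (a short sketch, reusing the pipe object from the snippets above): Copied # return the VAE to regular one-shot decoding +pipe.disable_vae_slicing() +pipe.disable_vae_tiling()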
CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models. Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). 
Since not all operators currently support the channels-last format, it may result in worst performance but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation. To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, 
encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/b44c324af60b437af27947b41ce25cb7.txt b/scrapped_outputs/b44c324af60b437af27947b41ce25cb7.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/b44c324af60b437af27947b41ce25cb7.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. 
Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/b457b3050c78d4ef15dd932f0a47c1b3.txt b/scrapped_outputs/b457b3050c78d4ef15dd932f0a47c1b3.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/b457b3050c78d4ef15dd932f0a47c1b3.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt unlike the existing text-to-image diffusion models such as Stable Diffusion which only generates an image. With almost the same number of parameters, LDM3D achieves to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper ldm3d-4c. The new version of LDM3D using 4 channels inputs instead of 6-channels inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. 
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. 
Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from communauty pipeline. diff --git a/scrapped_outputs/b466492f9ae3c04af17d3600f3a8eb36.txt b/scrapped_outputs/b466492f9ae3c04af17d3600f3a8eb36.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae010f11e65233dac43dd755c7b86163538bf00a --- /dev/null +++ b/scrapped_outputs/b466492f9ae3c04af17d3600f3a8eb36.txt @@ -0,0 +1,69 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. 
Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. It is also increasingly common to load and merge multiple LoRAs to create new and unique images. You can learn more about it in the in-depth Merge LoRAs guide since merging is outside the scope of this loading guide. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. 
Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Kohya TheLastBen To load a Kohya LoRA, let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai as an example: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. IP-Adapter IP-Adapter is a lightweight adapter that enables image prompting for any diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. You can learn more about how to use IP-Adapter for different tasks and specific use cases in the IP-Adapter guide. Diffusers currently only supports IP-Adapter for some of the most popular pipelines. Feel free to open a feature request if you have a cool use case and want to integrate IP-Adapter with an unsupported pipeline! +Official IP-Adapter checkpoints are available from h94/IP-Adapter. To start, load a Stable Diffusion checkpoint. 
Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Then load the IP-Adapter weights and add it to the pipeline with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") Once loaded, you can use the pipeline with an image and text prompt to guide the image generation process. Copied image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality, wearing sunglasses', +    ip_adapter_image=image, +    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", +    num_inference_steps=50, +    generator=generator, +).images[0] +images     IP-Adapter Plus IP-Adapter relies on an image encoder to generate image features. If the IP-Adapter repository contains an image_encoder subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you’ll need to explicitly load the image encoder with a CLIPVisionModelWithProjection model and pass it to the pipeline. This is the case for IP-Adapter Plus checkpoints which use the ViT-H image encoder. Copied from transformers import CLIPVisionModelWithProjection + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16 +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + image_encoder=image_encoder, + torch_dtype=torch.float16 +).to("cuda") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.safetensors") diff --git a/scrapped_outputs/b46722867eb3a84f45fd1338770ff097.txt b/scrapped_outputs/b46722867eb3a84f45fd1338770ff097.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b4d0bc88079043fadcd59bf71e1eb037.txt b/scrapped_outputs/b4d0bc88079043fadcd59bf71e1eb037.txt new file mode 100644 index 0000000000000000000000000000000000000000..4919abfa1ecf3f1ae19b2fe8dcb0c8a6dd16271b --- /dev/null +++ b/scrapped_outputs/b4d0bc88079043fadcd59bf71e1eb037.txt @@ -0,0 +1,2236 @@ +IF + + +Overview + +DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is a modular composed of a frozen text encoder and three cascaded pixel diffusion modules: +Stage 1: a base model that generates 64x64 px image based on text prompt, +Stage 2: a 64x64 px => 256x256 px super-resolution model, and a +Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, +which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. 
+Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. + +Usage + +Before you can use IF, you need to accept its usage conditions. To do so: +Make sure to have a Hugging Face account and be logged in +Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will auto-accept it for the other IF models. +Make sure to log in locally. Install huggingface_hub + + + Copied +pip install huggingface_hub --upgrade +run the login function in a Python shell + + + Copied +from huggingface_hub import login + +login() +and enter your Hugging Face Hub access token. +Next, we install diffusers and its dependencies: + + + Copied +pip install diffusers accelerate transformers safetensors +The following sections give more detailed examples of how to use IF. Specifically: +Text-to-Image Generation +Image-to-Image Generation +Inpainting +Reusing model weights +Speed optimization +Memory optimization +Available checkpoints +Stage-1 +DeepFloyd/IF-I-XL-v1.0 +DeepFloyd/IF-I-L-v1.0 +DeepFloyd/IF-I-M-v1.0 +Stage-2 +DeepFloyd/IF-II-L-v1.0 +DeepFloyd/IF-II-M-v1.0 +Stage-3 +stabilityai/stable-diffusion-x4-upscaler +Demo + +Google Colab + + +Text-to-Image Generation + +By default, diffusers makes use of model cpu offloading +to run the whole IF pipeline with as little as 14 GB of VRAM. + + + Copied +from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +image = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +pt_to_pil(image)[0].save("./if_stage_I.png") + +# stage 2 +image = stage_2( + image=image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_II.png") + +# stage 3 +image = stage_3(prompt=prompt, image=image, noise_level=100, generator=generator).images +image[0].save("./if_stage_III.png") + +Text Guided Image-to-Image Generation + +The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines.
+Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here. + + + Copied +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil + +import torch + +from PIL import Image +import requests +from io import BytesIO + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +image = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_I.png") + +# stage 2 +image = stage_2( + image=image, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_II.png") + +# stage 3 +image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images +image[0].save("./if_stage_III.png") + +Text Guided Inpainting Generation + +The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. +Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here. 
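For example, a minimal sketch of this reuse (assuming the stage_1 and stage_2 pipelines from the image-to-image example above are still loaded; the inpaint_stage_1 and inpaint_stage_2 names are only illustrative, see also the Converting between different pipelines section below): + + + Copied +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +# reuse the components of the already-loaded pipelines instead of downloading the weights again +inpaint_stage_1 = IFInpaintingPipeline(**stage_1.components) +inpaint_stage_2 = IFInpaintingSuperResolutionPipeline(**stage_2.components) +The full, self-contained inpainting example follows.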
+ + + Copied +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil +import torch + +from PIL import Image +import requests +from io import BytesIO + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image = original_image + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +response = requests.get(url) +mask_image = Image.open(BytesIO(response.content)) +mask_image = mask_image + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +image = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_I.png") + +# stage 2 +image = stage_2( + image=image, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_II.png") + +# stage 3 +image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images +image[0].save("./if_stage_III.png") + +Converting between different pipelines + +In addition to being loaded with from_pretrained, Pipelines can also be loaded directly from each other. + + + Copied +from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) + +Optimizing for speed + +The simplest optimization to run IF faster is to move all model components to the GPU. + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") +You can also run the diffusion process for a shorter number of timesteps. 
+This can either be done with the num_inference_steps argument + + + Copied +pipe("", num_inference_steps=30) +Or with the timesteps argument + + + Copied +from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) +When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to +the input image, which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. + + + Copied +pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images +You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give the expected results. + + + Copied +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder) +pipe.unet = torch.compile(pipe.unet) + +Optimizing for memory + +When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. +Either the model-based CPU offloading, + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() +or the more aggressive layer-based CPU offloading. + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() +Additionally, T5 can be loaded in 8-bit precision + + + Copied +from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") +For CPU RAM-constrained machines like the Google Colab free tier, where we can't load all +model components to the CPU at once, we can manually load the pipeline with only +the text encoder or the UNet, loading each component only when it is needed.
+ + + Copied +from diffusers import IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +image = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +pt_to_pil(image)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +image = pipe( + image=image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +pt_to_pil(image)[0].save("./if_stage_II.png") + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_if.py +Text-to-Image Generation +- +pipeline_if_superresolution.py +Text-to-Image Generation +- +pipeline_if_img2img.py +Image-to-Image Generation +- +pipeline_if_img2img_superresolution.py +Image-to-Image Generation +- +pipeline_if_inpainting.py +Image-to-Image Generation +- +pipeline_if_inpainting_superresolution.py +Image-to-Image Generation +- + +IFPipeline + + +class diffusers.IFPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 100 +timesteps: typing.List[int] = None +guidance_scale: float = 7.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], 
NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. 
If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
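As a quick, minimal sketch contrasting the two offloading options described above (assuming the stage-1 checkpoint used throughout this document; pick one of the two calls, not both): + + + Copied +import torch +from diffusers import IFPipeline + +pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) + +# whole-model offloading: moves one full model to the GPU at a time (moderate memory savings, better speed) +pipe.enable_model_cpu_offload() + +# layer-wise offloading: moves submodules to the GPU on demand (largest memory savings, slowest) +# pipe.enable_sequential_cpu_offload()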
+ +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFSuperResolutionPipeline + + +class diffusers.IFSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: int = None +width: int = None +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] = None +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 250 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. 
+ + +width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. + + +image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. 
+ + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. 
+Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFImg2ImgPipeline + + +class diffusers.IFImg2ImgPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.7 +num_inference_steps: int = 80 +timesteps: typing.List[int] = None +guidance_scale: float = 10.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. 
+guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). 
+prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFImg2ImgSuperResolutionPipeline + + +class diffusers.IFImg2ImgSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] +original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.8 +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 250 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. 
If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). 
+prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFInpaintingPipeline + + +class diffusers.IFInpaintingPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +mask_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 1.0 +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 7.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. 
+device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFInpaintingSuperResolutionPipeline + + +class diffusers.IFInpaintingSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] +original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +mask_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.8 +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 100 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. 
If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. 
If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
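As a rough guide, here is a hedged sketch of picking between the two offloading strategies described on this page (checkpoint id and arguments reused from the examples above; use one of the two calls, not both):
Copied
import torch
from diffusers import IFInpaintingSuperResolutionPipeline

pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)

# moves one whole sub-model to the GPU at a time; modest memory savings, small speed cost
pipe.enable_model_cpu_offload()

# alternative for very constrained GPUs: submodule-level offloading
# (much larger memory savings, but significantly slower)
# pipe.enable_sequential_cpu_offload()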
+ +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. diff --git a/scrapped_outputs/b51f6c8819b03d3cd2cf47ffc03e6247.txt b/scrapped_outputs/b51f6c8819b03d3cd2cf47ffc03e6247.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/b51f6c8819b03d3cd2cf47ffc03e6247.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. 
To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. 
First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", + +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up som GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass it off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory-usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer. transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
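Complementing the 8-bit walkthrough above, a minimal sketch of the load_in_4bit option it mentions (the rest of the workflow, including deleting the text encoder and decoding the latents, stays the same; quality should be compared against the full-precision output):
Copied
from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline

# load only the text encoder in 4-bit to push the memory requirement below ~7GB
text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_4bit=True,
    device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="auto",
)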
__call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and is ignored for other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from the negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool, defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it should be the embeddings of the "" +string. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/b5279a0be9bfe591db53967fb4ee7e84.txt b/scrapped_outputs/b5279a0be9bfe591db53967fb4ee7e84.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/b5279a0be9bfe591db53967fb4ee7e84.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/b57b9693a00178065929495d293e7d94.txt b/scrapped_outputs/b57b9693a00178065929495d293e7d94.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/b57b9693a00178065929495d293e7d94.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. 
Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. 
This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
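For orientation, a minimal usage sketch of load_lora_weights() (base checkpoint and LoRA repository reused from the fuse_lora example above; the prompt is illustrative):
Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# loads the LoRA layers into the UNet and text encoder(s) under the adapter name "pixel"
pipeline.load_lora_weights(
    "nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel"
)
image = pipeline("pixel art of a corgi").images[0]  # prompt is illustrative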
lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. 
is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
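Tying several of these methods together, a hedged sketch of inspecting and managing loaded adapters (ids reused from the get_active_adapters example above; assumes a diffusers installation with the PEFT backend):
Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

print(pipeline.get_active_adapters())  # ["toy"]
print(pipeline.get_list_adapters())    # mapping of pipeline components to their adapter names

# park the adapter on the CPU to free GPU memory, or remove it entirely
pipeline.set_lora_device(["toy"], device="cpu")
pipeline.delete_adapters("toy")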
diff --git a/scrapped_outputs/b583c283ef5e83f7d5259f2feda791bd.txt b/scrapped_outputs/b583c283ef5e83f7d5259f2feda791bd.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/b583c283ef5e83f7d5259f2feda791bd.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline, and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token , and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust the parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images and visualize them with the helper function you created at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at it’s structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder. 
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/b59c581209925f9ec4e763b130bac73e.txt b/scrapped_outputs/b59c581209925f9ec4e763b130bac73e.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b674d6dbdb5acb929fa209ec80df7084a21fd2 --- /dev/null +++ b/scrapped_outputs/b59c581209925f9ec4e763b130bac73e.txt @@ -0,0 +1,243 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames[0] +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. 
Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames[0] +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames[0] +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames[0] +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames[0] +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Tips Video generation is memory-intensive and one way to reduce your memory usage is to set enable_forward_chunking on the pipeline’s UNet so you don’t run the entire feedforward layer at once. Breaking it up into chunks in a loop is more efficient. Check out the Text or image-to-video guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage. 
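As a compact recap of these tips, here is a sketch, assuming the damo-vilab/text-to-video-ms-1.7b checkpoint and a CUDA GPU, that applies model CPU offloading, VAE slicing, and feed-forward chunking to a single pipeline before generating:

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)

# memory optimizations discussed above
pipe.enable_model_cpu_offload()                          # move submodules to the GPU only when needed
pipe.enable_vae_slicing()                                # decode frames in slices instead of all at once
pipe.unet.enable_forward_chunking(chunk_size=1, dim=1)   # run the feed-forward layers in chunks

video_frames = pipe("Darth Vader surfing a wave", num_frames=24).frames[0]
video_path = export_to_video(video_frames)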
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames[0] +>>> video_path = export_to_video(video_frames) +>>> video_path encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. 
num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video +>>> from PIL import Image + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames[0] +>>> # save the low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames[0] +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — +List of video outputs - It can be a nested list of length batch_size, with each sub-list containing denoised PIL image sequences of length num_frames. Output class for text-to-video pipelines. 
It can also be a NumPy array or Torch tensor of shape +(batch_size, num_frames, channels, height, width) diff --git a/scrapped_outputs/b59fb5e3c826c8b30ba45b279f204ce5.txt b/scrapped_outputs/b59fb5e3c826c8b30ba45b279f204ce5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b5b17158c32b6fc8cd37c15ead0d17cf.txt b/scrapped_outputs/b5b17158c32b6fc8cd37c15ead0d17cf.txt new file mode 100644 index 0000000000000000000000000000000000000000..8e035fc84c71e612cbb63df1b6546d79487088fd --- /dev/null +++ b/scrapped_outputs/b5b17158c32b6fc8cd37c15ead0d17cf.txt @@ -0,0 +1,173 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. 
Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet + +For this example, we'll use the [LCM_Dreamshaper_v7](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7) model with canny ControlNet, but the same steps can be applied to other LCM models as well. 
+ + Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/b5b7971bf45edd613f1d599e98372089.txt b/scrapped_outputs/b5b7971bf45edd613f1d599e98372089.txt new file mode 100644 index 
0000000000000000000000000000000000000000..aff30a571d591ad29e58f7a9ac94d335f8988bee --- /dev/null +++ b/scrapped_outputs/b5b7971bf45edd613f1d599e98372089.txt @@ -0,0 +1,136 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/textual_inversion +pip install -r requirements.txt + + + + Copied cd examples/textual_inversion +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. 
For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. 
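To make that loop more concrete, here is a heavily condensed, illustrative sketch of a single textual inversion training step, assuming Stable Diffusion v1-5 and the <cat-toy>/toy placeholder and initializer tokens used later in this guide. It leaves out the dataloader, 🤗 Accelerate setup, and the logic in the real script that restores the frozen embedding rows after each optimizer step:

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# add the placeholder token and initialize its embedding from the initializer token
tokenizer.add_tokens("<cat-toy>")
placeholder_id = tokenizer.convert_tokens_to_ids("<cat-toy>")
initializer_id = tokenizer.encode("toy", add_special_tokens=False)[0]
text_encoder.resize_token_embeddings(len(tokenizer))
token_embeds = text_encoder.get_input_embeddings()
token_embeds.weight.data[placeholder_id] = token_embeds.weight.data[initializer_id].clone()

# freeze everything except the token embedding table
vae.requires_grad_(False)
unet.requires_grad_(False)
text_encoder.text_model.encoder.requires_grad_(False)
text_encoder.text_model.final_layer_norm.requires_grad_(False)
text_encoder.text_model.embeddings.position_embedding.requires_grad_(False)
optimizer = torch.optim.AdamW(token_embeds.parameters(), lr=5.0e-04)

# one illustrative step on a dummy batch (the real script draws batches from TextualInversionDataset)
pixel_values = torch.randn(1, 3, 512, 512)
input_ids = tokenizer(
    "a photo of a <cat-toy>", padding="max_length", truncation=True,
    max_length=tokenizer.model_max_length, return_tensors="pt",
).input_ids

latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],))
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

encoder_hidden_states = text_encoder(input_ids)[0]
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), noise.float())
loss.backward()

# gradients only reach the embedding table; the real script then copies the original
# weights back into every row except the placeholder token, so only the new embedding changes
optimizer.step()
optimizer.zero_grad()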
If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="A <cat-toy> train" +--num_validation_images=4 +--validation_steps=100 + + + + Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="<cat-toy>" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub + + + + Copied export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export DATA_DIR="./cat" + +python textual_inversion_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="<cat-toy>" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --output_dir="textual_inversion_cat" \ + --push_to_hub + + +After training is complete, you can use your newly trained model for inference like: + + + + Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A <cat-toy> train", num_inference_steps=50).images[0] +image.save("cat-train.png") + + +Flax doesn’t support the load_textual_inversion() method, but the textual_inversion_flax.py script saves the learned embeddings as a part of the model after training. 
This means you can use the model for inference like any other Flax model: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +model_path = "path-to-your-trained-model" +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) + +prompt = "A train" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +image.save("cat-train.png") + + + Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/b5decb4f0b643c0245cd1f954f893b4e.txt b/scrapped_outputs/b5decb4f0b643c0245cd1f954f893b4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef62c086e705e0fd98841711ee18a967fbc85f5e --- /dev/null +++ b/scrapped_outputs/b5decb4f0b643c0245cd1f954f893b4e.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/b5edc36077e73a635aa0e859e9e92cc1.txt b/scrapped_outputs/b5edc36077e73a635aa0e859e9e92cc1.txt new file mode 100644 index 0000000000000000000000000000000000000000..db7171b03930077dc4188ad756a7f5e1ae92467f --- /dev/null +++ b/scrapped_outputs/b5edc36077e73a635aa0e859e9e92cc1.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. 
In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. 
downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → ~models.unet_2d.UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_2d.UNet2DOutput instead of a plain tuple. Returns +~models.unet_2d.UNet2DOutput or tuple + +If return_dict is True, an ~models.unet_2d.UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unets.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. 
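To make the shape contract described above concrete, here is a small, self-contained sketch that instantiates a randomly initialized toy configuration of UNet2DModel and runs a single forward pass. The configuration values are illustrative only, not a pretrained checkpoint:

import torch
from diffusers import UNet2DModel

# a small, randomly initialized model purely to illustrate the forward signature
model = UNet2DModel(
    sample_size=64,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    layers_per_block=1,
)

sample = torch.randn(1, 3, 64, 64)   # noisy input (batch, channel, height, width)
timestep = torch.tensor([10])        # denoising timestep

with torch.no_grad():
    output = model(sample, timestep)

print(output.sample.shape)  # torch.Size([1, 3, 64, 64]) -- same spatial size as the input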
diff --git a/scrapped_outputs/b619ecb7eb43e3aa16495bf0bbe84cdd.txt b/scrapped_outputs/b619ecb7eb43e3aa16495bf0bbe84cdd.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/b619ecb7eb43e3aa16495bf0bbe84cdd.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. 
Instantiates a text-to-image PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Finding the text-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights.
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. 
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates an image-to-image PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Finding the image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates an image-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error).
Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". 
offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates an inpainting PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: Detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object Finding the inpainting pipeline linked to the pipeline class using pattern matching on pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates an inpainting PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default.
Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/b64ebf7ab7d2d355905c26783210f39f.txt b/scrapped_outputs/b64ebf7ab7d2d355905c26783210f39f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b660243ad53357cfa07ea42199c2161c.txt b/scrapped_outputs/b660243ad53357cfa07ea42199c2161c.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/b660243ad53357cfa07ea42199c2161c.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. 
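To make the latent hand-off described in the introduction above concrete, here is a minimal sketch that keeps the output of one pipeline in latent space and feeds it to a second pipeline; the latent upscaler checkpoint used here is only an illustrative choice and is not part of this page: Copied
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# output_type="latent" skips VAE decoding, so .images holds latents instead of PIL images
low_res_latents = pipeline(prompt, output_type="latent").images

# the second pipeline accepts those latents directly as its image input
image = upscaler(prompt=prompt, image=low_res_latents, num_inference_steps=20).images[0]
image.save("astronaut.png")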
blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked areas in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect ratio of the original image; +for example, if the user drew a mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. If it is a numpy array, should have +shape [batch, height, width] or [batch, height, width, channel]; if it is a pytorch tensor, should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use the height of the image input. width (int, optional, defaults to None) — The width of the preprocessed image. If None, will use the width of the image input. This function returns the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors, as well as lists of these formats. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use get_default_height_width() to get the default height. width (int, optional, defaults to None) — The width of the preprocessed image. If None, will use get_default_height_width() to get the default width.
resize_mode (str, optional, defaults to default) — +The resize mode; it can be one of default, fill, or crop. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use; it can be one of default, fill, or crop. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors.
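Stepping back to the base VaeImageProcessor documented above, its resize/normalize helpers compose into a simple round trip that can be exercised independently of any pipeline; the randomly generated input image below is purely a placeholder: Copied
import numpy as np
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor(vae_scale_factor=8)

# placeholder PIL image; any RGB image works here
pil_image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))

# PIL -> torch tensor of shape (1, 3, H, W), resized to multiples of vae_scale_factor
# and normalized to [-1, 1] because do_normalize defaults to True
tensor = image_processor.preprocess(pil_image)
print(tensor.shape)

# tensor -> list of PIL images, denormalized back to [0, 1] before conversion
pil_again = image_processor.postprocess(tensor, output_type="pil")[0]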
rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/b660ee253849f252d638f684785c4162.txt b/scrapped_outputs/b660ee253849f252d638f684785c4162.txt new file mode 100644 index 0000000000000000000000000000000000000000..643707bcdd440e65416f02ac6003e845768e0c87 --- /dev/null +++ b/scrapped_outputs/b660ee253849f252d638f684785c4162.txt @@ -0,0 +1,96 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Also, to know more about reducing the memory usage of this pipeline, refer to the [“Reduce memory usage”] section here. Sample output with I2VGenXL: masterpiece, bestquality, sunset. + Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. 
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) — +A I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) → pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. target_fps (int, optional) — +Frames per second. The rate at which the generated images shall be exported to a video after generation. This is also used as a “micro-condition” while generation. num_frames (int, optional) — +The number of video frames to generate. num_inference_steps (int, optional) — +The number of denoising steps. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) — +The number of images to generate per prompt. decode_chunk_size (int, optional) — +The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency +between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once +for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple + +If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch +>>> from diffusers import I2VGenXLPipeline + +>>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +>>> pipeline.enable_model_cpu_offload() + +>>> image_url = "https://github.com/ali-vilab/i2vgen-xl/blob/main/data/test_images/img_0009.png?raw=true" +>>> image = load_image(image_url).convert("RGB") + +>>> prompt = "Papers were floating in the air on a table in the library" +>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +>>> generator = torch.manual_seed(8888) + +>>> frames = pipeline( +... prompt=prompt, +... image=image, +... num_inference_steps=50, +... negative_prompt=negative_prompt, +... guidance_scale=9.0, +... generator=generator +... ).frames[0] +>>> video_path = export_to_gif(frames, "i2v.gif") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. 
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_videos_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for image-to-video pipeline. diff --git a/scrapped_outputs/b663185a90d47c02b4412999ee1db12d.txt b/scrapped_outputs/b663185a90d47c02b4412999ee1db12d.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/b663185a90d47c02b4412999ee1db12d.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). 
You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/b685c5f162207e4665db45a5913e025b.txt b/scrapped_outputs/b685c5f162207e4665db45a5913e025b.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/b685c5f162207e4665db45a5913e025b.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. 
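As a shape-level sketch of how these activation modules are used (the dimensions below are arbitrary assumptions), both GELU and GEGLU project the last dimension from dim_in to dim_out: Copied
import torch
from diffusers.models.activations import GELU, GEGLU

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, channels)

# GELU: linear projection dim_in -> dim_out followed by (optionally tanh-approximated) GELU
gelu = GELU(dim_in=320, dim_out=1280, approximate="tanh")
print(gelu(hidden_states).shape)   # torch.Size([2, 77, 1280])

# GEGLU: linear projection dim_in -> 2 * dim_out, split into value and gate;
# the output is value * gelu(gate), so the last dimension is still dim_out
geglu = GEGLU(dim_in=320, dim_out=1280)
print(geglu(hidden_states).shape)  # torch.Size([2, 77, 1280])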
ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/b68d4d017eb8b7350b4efa3c058a4a24.txt b/scrapped_outputs/b68d4d017eb8b7350b4efa3c058a4a24.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/b68d4d017eb8b7350b4efa3c058a4a24.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin, Anastasia Maltseva, Igor Pavlov, Andrei Filatov, Arseniy Shakhmatov, Andrey Kuznetsov, Denis Dimitrov, and Zein Shaheen. The description from its GitHub page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder-decoder model based on the T5 architecture. New U-Net architecture featuring BigGAN-deep blocks doubles depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper.
Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. 
The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. 
When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." 
>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png")

>>> generator = torch.Generator(device="cpu").manual_seed(0)
>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0]
encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — prompt to be encoded device (torch.device, optional) — torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument. attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states.
diff --git a/scrapped_outputs/b6a374e8a24fd78d0c31c795c857c535.txt b/scrapped_outputs/b6a374e8a24fd78d0c31c795c857c535.txt new file mode 100644 index 0000000000000000000000000000000000000000..0f46c0cfb05d4b31a44bcbb7e006dad028814545 --- /dev/null +++ b/scrapped_outputs/b6a374e8a24fd78d0c31c795c857c535.txt @@ -0,0 +1,53 @@
IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide, and you can see how to use it in the usage guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union image_encoder_folder: Optional = 'image_encoder' **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or List[str] or os.PathLike or List[os.PathLike] or dict or List[dict]) — Can be either:

A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on the Hub.
A path to a directory (for example ./my_model_directory) containing the model weights saved with ModelMixin.save_pretrained().
A torch state dict.
+ subfolder (str or List[str]) — +The subfolder location of a model file within a larger model repository on the Hub or locally. +If a list is passed, it should have the same length as weight_name. weight_name (str or List[str]) — +The name of the weight file to load. If a list is passed, it should have the same length as +weight_name. image_encoder_folder (str, optional, defaults to image_encoder) — +The subfolder location of the image encoder within a larger model repository on the Hub or locally. +Pass None to not load the image encoder. If the image encoder is located in a folder inside subfolder, +you only need to pass the name of the folder that contains image encoder weights, e.g. image_encoder_folder="image_encoder". +If the image encoder is located in a folder other than subfolder, you should pass the path to the folder that contains image encoder weights, +for example, image_encoder_folder="different_subfolder/image_encoder". cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. set_ip_adapter_scale < source > ( scale ) Sets the conditioning scale between text and image. Example: Copied pipeline.set_ip_adapter_scale(0.5) unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... IPAdapterMaskProcessor class diffusers.image_processor.IPAdapterMaskProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = False do_binarize: bool = True do_convert_grayscale: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. 
If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to False) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to True) — +Whether to binarize the image to 0/1. do_convert_grayscale (bool, optional, defaults to be True) — +Whether to convert the images to grayscale format. Image processor for IP Adapter image masks. downsample < source > ( mask: FloatTensor batch_size: int num_queries: int value_embed_dim: int ) → torch.FloatTensor Parameters mask (torch.FloatTensor) — +The input mask tensor generated with IPAdapterMaskProcessor.preprocess(). batch_size (int) — +The batch size. num_queries (int) — +The number of queries. value_embed_dim (int) — +The dimensionality of the value embeddings. Returns +torch.FloatTensor + +The downsampled mask tensor. + Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. +If the aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued. diff --git a/scrapped_outputs/b6e63c961eddaf3b053ca800d4483397.txt b/scrapped_outputs/b6e63c961eddaf3b053ca800d4483397.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0b0a9f6f6538388b8c5e1816de1537cd679e779 --- /dev/null +++ b/scrapped_outputs/b6e63c961eddaf3b053ca800d4483397.txt @@ -0,0 +1,96 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. 
By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. 
Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
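The view_batch_size and circular_padding options discussed in the Tips section are not shown in the example above, so here is a minimal sketch of how they might be combined; it reuses the checkpoint from the example, and the output filename is only illustrative. Copied
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of the dolomites"
# circular_padding=True avoids a visible seam between the right and left edges of the panorama;
# view_batch_size > 1 denoises several views per forward pass, trading extra VRAM for speed.
image = pipe(
    prompt,
    width=2048,
    circular_padding=True,
    view_batch_size=2,
).images[0]
image.save("dolomites_360.png")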
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/b6ea2879e896cb5b2b0720b2036540fa.txt b/scrapped_outputs/b6ea2879e896cb5b2b0720b2036540fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..c565f5b1caff03532e8726d9531e87542512d714 --- /dev/null +++ b/scrapped_outputs/b6ea2879e896cb5b2b0720b2036540fa.txt @@ -0,0 +1,294 @@ +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation + + +Overview + +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. +The abstract of the paper is the following: +*Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionPanoramaPipeline +Text-Guided Panorama View Generation +🤗 Space) + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +model_ckpt = "stabilityai/stable-diffusion-2-base" +scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, torch_dtype=torch.float16) + +pipe = pipe.to("cuda") + +prompt = "a photo of the dolomites" +image = pipe(prompt).images[0] +image.save("dolomites.png") + +StableDiffusionPanoramaPipeline + + +class diffusers.StableDiffusionPanoramaPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: DDIMScheduler +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. 
Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. The original work +on Multi Diffsion used the DDIMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using “MultiDiffusion: Fusing Diffusion Paths for Controlled Image +Generation”. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). +To generate panorama-like images, be sure to pass the width parameter accordingly when using the pipeline. Our +recommendation for the width value is 2048. This is the default value of the width parameter for this pipeline. + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = 512 +width: typing.Optional[int] = 2048 +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to 512 — +The height in pixels of the generated image. + + +width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept to a high number because the +pipeline is supposed to be used for generating panorama-like images. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. 
Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple. callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step. cross_attention_kwargs (dict, optional) — A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under self.processor in diffusers.cross_attention.

Returns

StableDiffusionPipelineOutput or tuple

StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.

Examples:

 Copied
>>> import torch
>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

>>> model_ckpt = "stabilityai/stable-diffusion-2-base"
>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained(
...     model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
... )

>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of the dolomites"
>>> image = pipe(prompt).images[0]

disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.
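As a quick, hypothetical usage sketch (the pipe and prompt objects are assumed to come from the example above), sliced VAE decoding can be switched on for a larger batch and switched off again afterwards: Copied
# Decode the VAE output in slices to lower peak memory when generating a batch of four images,
# then return to single-step decoding.
pipe.enable_vae_slicing()
images = pipe([prompt] * 4).images
pipe.disable_vae_slicing()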
enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.

enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
diff --git a/scrapped_outputs/b70d5b44bb1255072d26be4e26f34e16.txt b/scrapped_outputs/b70d5b44bb1255072d26be4e26f34e16.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/b70d5b44bb1255072d26be4e26f34e16.txt @@ -0,0 +1,65 @@
Latent Consistency Model Multistep Scheduler Overview The multistep and onestep schedulers (Algorithm 3) were introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — The starting beta value of inference. beta_end (float, defaults to 0.02) — The final beta value. beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — Each diffusion step uses the alphas product value at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — An offset added to the inference steps.
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. 
timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/b71c1836f273730f7f7b2328d06b4c43.txt b/scrapped_outputs/b71c1836f273730f7f7b2328d06b4c43.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b72154664450c00139b99c10ed00ce06.txt b/scrapped_outputs/b72154664450c00139b99c10ed00ce06.txt new file mode 100644 index 0000000000000000000000000000000000000000..019c4d1bed8279c368db9a675af18172eacecbe1 --- /dev/null +++ b/scrapped_outputs/b72154664450c00139b99c10ed00ce06.txt @@ -0,0 +1,24 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: str weight_name: str **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. 
proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. diff --git a/scrapped_outputs/b72bc455e9495b580e10fd08f662f98b.txt b/scrapped_outputs/b72bc455e9495b580e10fd08f662f98b.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/b72bc455e9495b580e10fd08f662f98b.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. 
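Since batching several prompts in one call can run out of memory (as noted above), a simple workaround is to loop over the prompts and generate them one at a time. The sketch below assumes the pipeline exported earlier to sd_v15_onnx; the second prompt is only illustrative. Copied
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained("sd_v15_onnx")

prompts = [
    "sailing ship in storm by Leonardo da Vinci",
    "a watercolor lighthouse at dawn",
]

# Call the pipeline once per prompt instead of passing the whole list as a single batch.
images = [pipeline(p).images[0] for p in prompts]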
Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/b763b59716397350326009605df630a3.txt b/scrapped_outputs/b763b59716397350326009605df630a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b77273f5d3d15ddc268dda0802911c43.txt b/scrapped_outputs/b77273f5d3d15ddc268dda0802911c43.txt new file mode 100644 index 0000000000000000000000000000000000000000..96c0514d704cece83e17fb8a355ec25c182d1eb8 --- /dev/null +++ b/scrapped_outputs/b77273f5d3d15ddc268dda0802911c43.txt @@ -0,0 +1,24 @@ +Unconditional Latent Diffusion Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMPipeline class diffusers.LDMPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: DDIMScheduler ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. 
unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +DDIMScheduler is used in combination with unet to denoise the encoded image latents. Pipeline for unconditional image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None eta: float = 0.0 num_inference_steps: int = 50 output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +Number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import LDMPipeline + +>>> # load model and scheduler +>>> pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/b7a2796a7d2ff11508d3ccd50eb7e8f0.txt b/scrapped_outputs/b7a2796a7d2ff11508d3ccd50eb7e8f0.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/b7a2796a7d2ff11508d3ccd50eb7e8f0.txt @@ -0,0 +1,65 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. 
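As a hedged end-to-end sketch, LCMScheduler is the default scheduler of LatentConsistencyModelPipeline, so few-step sampling only requires lowering num_inference_steps; the checkpoint name below is illustrative rather than prescriptive. Copied
import torch
from diffusers import LatentConsistencyModelPipeline

# Example LCM checkpoint; any latent consistency model distilled for this pipeline should work.
pipe = LatentConsistencyModelPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# 4 steps is usually enough; the scheduler is designed for roughly 1-8 steps.
image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,
).images[0]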
LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. 
This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
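To make the method reference above concrete, the following sketch swaps LCMScheduler into an existing Stable Diffusion pipeline with from_config and loads a latent-consistency LoRA so that few-step sampling remains usable; the base model and LoRA repository ids are assumptions for illustration.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Base checkpoint and LoRA id below are illustrative assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config when switching to LCMScheduler.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# An LCM LoRA distilled for this base model keeps quality reasonable at ~4 steps.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "a cup of coffee on a wooden table, soft morning light",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]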
diff --git a/scrapped_outputs/b7c17e2f10422d1d134c2b51210f9abc.txt b/scrapped_outputs/b7c17e2f10422d1d134c2b51210f9abc.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4c2eb6cbae969f77508d1033b9305f0e934f6cf --- /dev/null +++ b/scrapped_outputs/b7c17e2f10422d1d134c2b51210f9abc.txt @@ -0,0 +1,109 @@ +LoRA Support in Diffusers + +Diffusers supports LoRA for faster fine-tuning of Stable Diffusion, allowing greater memory efficiency and easier portability. +Low-Rank Adaption of Large Language Models was first introduced by Microsoft in +LoRA: Low-Rank Adaptation of Large Language Models by Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen. +In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition weight matrices (called update matrices) +to existing weights and only training those newly added weights. This has a couple of advantages: +Previous pretrained weights are kept frozen so that the model is not so prone to catastrophic forgetting. +Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable. +LoRA matrices are generally added to the attention layers of the original model and they control to which extent the model is adapted toward new training images via a scale parameter. +Note that the usage of LoRA is not just limited to attention layers. In the original LoRA work, the authors found out that just amending +the attention layers of a language model is sufficient to obtain good downstream performance with great efficiency. This is why, it’s common +to just add the LoRA weights to the attention layers of a model. +cloneofsimo was the first to try out LoRA training for Stable Diffusion in the popular lora GitHub repository. +LoRA allows us to achieve greater memory efficiency since the pretrained weights are kept frozen and only the LoRA weights are trained, thereby +allowing us to run fine-tuning on consumer GPUs like Tesla T4, RTX 3080 or even RTX 2080 Ti! One can get access to GPUs like T4 in the free +tiers of Kaggle Kernels and Google Colab Notebooks. + +Getting started with LoRA for fine-tuning + +Stable Diffusion can be fine-tuned in different ways: +Textual inversion +DreamBooth +Text2Image fine-tuning +We provide two end-to-end examples that show how to run fine-tuning with LoRA: +DreamBooth +Text2Image +If you want to perform DreamBooth training with LoRA, for instance, you would run: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="path-to-instance-images" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_dreambooth_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --checkpointing_steps=100 \ + --learning_rate=1e-4 \ + --report_to="wandb" \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=500 \ + --validation_prompt="A photo of sks dog in a bucket" \ + --validation_epochs=50 \ + --seed="0" \ + --push_to_hub +A similar process can be followed to fully fine-tune Stable Diffusion on a custom dataset using the +examples/text_to_image/train_text_to_image_lora.py script. +Refer to the respective examples linked above to learn more. 
+When using LoRA we can use a much higher learning rate (typically 1e-4 as opposed to ~1e-6) compared to non-LoRA DreamBooth fine-tuning. +But there is no free lunch. For the given dataset and expected generation quality, you’d still need to experiment with +different hyperparameters. Here are some important ones: +Training time: learning rate and number of training steps. +Inference time: number of sampling steps and scheduler type. +Additionally, you can follow this blog that documents some of our experimental +findings for performing DreamBooth training of Stable Diffusion. +When fine-tuning, the LoRA update matrices are only added to the attention layers. To enable this, we added new weight +loading functionalities. Their details are available here. + +Inference + +Assuming you used the examples/text_to_image/train_text_to_image_lora.py script to fine-tune Stable Diffusion on the Pokemon +dataset, you can perform inference like so: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_path = "sayakpaul/sd-model-finetuned-lora-t4" +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) +pipe.unet.load_attn_procs(model_path) +pipe.to("cuda") + +prompt = "A pokemon with blue eyes." +image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") +Here are some example images you can expect: + +sayakpaul/sd-model-finetuned-lora-t4 contains LoRA fine-tuned update matrices +which are only 3 MB in size. During inference, the pre-trained Stable Diffusion checkpoint is loaded alongside these update +matrices, and they are combined to run inference. +You can use the huggingface_hub library to retrieve the base model +from sayakpaul/sd-model-finetuned-lora-t4 like so: + + + Copied +from huggingface_hub.repocard import RepoCard + +card = RepoCard.load("sayakpaul/sd-model-finetuned-lora-t4") +base_model = card.data.to_dict()["base_model"] +# 'CompVis/stable-diffusion-v1-4' +And then you can use pipe = StableDiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float16). +This is especially useful when you don’t want to hardcode the base model identifier when initializing the StableDiffusionPipeline. +Inference for DreamBooth training remains the same. Check +this section for more details. + +Known limitations + +Currently, we only support LoRA for the attention layers of UNet2DConditionModel. diff --git a/scrapped_outputs/b7db3a78de7e92a267f3cd76cbf60c99.txt b/scrapped_outputs/b7db3a78de7e92a267f3cd76cbf60c99.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/b7db3a78de7e92a267f3cd76cbf60c99.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how to load .safetensors files, and how to convert Stable Diffusion model weights stored in other formats to .safetensors. 
Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. 
The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/b7ddde48566adfc2080bb7ae212f2319.txt b/scrapped_outputs/b7ddde48566adfc2080bb7ae212f2319.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/b7ddde48566adfc2080bb7ae212f2319.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/b877d9f43b7245c96138f7c727606e65.txt b/scrapped_outputs/b877d9f43b7245c96138f7c727606e65.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b8967afe56119ef6d934aeded91d353d.txt b/scrapped_outputs/b8967afe56119ef6d934aeded91d353d.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a4046e75f9452321616835465fe3146c7ab0c46 --- /dev/null +++ b/scrapped_outputs/b8967afe56119ef6d934aeded91d353d.txt @@ -0,0 +1,215 @@ +Configuration + +The handling of configurations in Diffusers is with the ConfigMixin class. + +class diffusers.ConfigMixin + +< +source +> +( +) + + + +Base class for all configuration classes. Stores all configuration parameters under self.config Also handles all +methods for loading/downloading/saving classes inheriting from ConfigMixin with +from_config() +save_config() +Class attributes: +config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). +ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). +has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). +_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). + +from_config + +< +source +> +( +config: typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +config (Dict[str, Any]) — +A config dictionary from which the Python class will be instantiated. Make sure to only load +configuration files of compatible classes. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the Python class. +**kwargs will be directly passed to the underlying scheduler/model’s __init__ method and eventually +overwrite same named arguments of config. + + + +Instantiate a Python class from a config dictionary + +Examples: + + + Copied +>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) + +load_config + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using save_config(), e.g., +./my_model_directory/. 
+ + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + + +Instantiate a Python class from a config dictionary +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +save_config + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a configuration object to the directory save_directory, so that it can be re-loaded using the +from_config() class method. + +to_json_file + +< +source +> +( +json_file_path: typing.Union[str, os.PathLike] + +) + + +Parameters + +json_file_path (str or os.PathLike) — +Path to the JSON file in which this configuration instance’s parameters will be saved. + + + +Save this instance to a JSON file. + +to_json_string + +< +source +> +( +) +→ +str + +Returns + +str + + + +String containing all the attributes that make up this configuration instance in JSON format. + + +Serializes this instance to a JSON string. +Under further construction 🚧, open a PR if you want to contribute! diff --git a/scrapped_outputs/b8a209a16d8d179ff9ba620bfece5189.txt b/scrapped_outputs/b8a209a16d8d179ff9ba620bfece5189.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/b8a209a16d8d179ff9ba620bfece5189.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. 
The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable the guidance scale by setting guidance_scale=0.0. SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source, meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/b8ade700d065585f5a42cfba9cb2daec.txt b/scrapped_outputs/b8ade700d065585f5a42cfba9cb2daec.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/b8ade700d065585f5a42cfba9cb2daec.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation, such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference. With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from an 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. 
Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. torch.compile PyTorch 2 includes torch.compile which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. 
+pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduces the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed which when placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency and it becomes more evident when the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, any CPU and GPU communication sync should be none or be kept to a bare minimum because it can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. 
First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/b8c7ad83d0eb72833f625c376ab0f7e1.txt b/scrapped_outputs/b8c7ad83d0eb72833f625c376ab0f7e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/b8c7ad83d0eb72833f625c376ab0f7e1.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. 
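Before the class reference below, here is a minimal sketch of how the inverse scheduler is typically paired with a regular DDIMScheduler built from the same config; the repository id is an assumption used only for illustration.
from diffusers import DDIMScheduler, DDIMInverseScheduler

# Illustrative checkpoint, used only to obtain a scheduler config.
scheduler = DDIMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)

# Build the inverse scheduler from the same config so both share the beta and timestep settings.
inverse_scheduler = DDIMInverseScheduler.from_config(scheduler.config)

# set_timesteps prepares the inversion schedule; step() then moves a clean latent
# toward noise, i.e. the opposite direction of the usual denoising loop.
inverse_scheduler.set_timesteps(50)
print(inverse_scheduler.timesteps[:5])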
DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/b8e1f8871070d563812986ff2f97ad72.txt b/scrapped_outputs/b8e1f8871070d563812986ff2f97ad72.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/b8e1f8871070d563812986ff2f97ad72.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. 
We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). 
device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/b90aaf8b4a24c22f0c26aeae0d976c22.txt b/scrapped_outputs/b90aaf8b4a24c22f0c26aeae0d976c22.txt new file mode 100644 index 0000000000000000000000000000000000000000..d8610ad87c070caa4fdd6e48fd8b56d49472e888 --- /dev/null +++ b/scrapped_outputs/b90aaf8b4a24c22f0c26aeae0d976c22.txt @@ -0,0 +1,41 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. 
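As context for the API below, the sketch shows the usual way to opt into this scheduler: replace a pipeline's default scheduler via from_config. The checkpoint id and prompt are illustrative assumptions.
import torch
from diffusers import DiffusionPipeline, HeunDiscreteScheduler

# Illustrative checkpoint id.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Keep the existing scheduler config; only the sampling algorithm changes.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)

# Heun is a second-order method that evaluates the model twice per step.
image = pipe("a watercolor painting of a lighthouse", num_inference_steps=25).images[0]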
HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/b928b33df2330be7a644313dce758418.txt b/scrapped_outputs/b928b33df2330be7a644313dce758418.txt new file mode 100644 index 0000000000000000000000000000000000000000..0b817eaf4714d1f34a75a12bd55a710beab93be8 --- /dev/null +++ b/scrapped_outputs/b928b33df2330be7a644313dce758418.txt @@ -0,0 +1,271 @@ +Image Variation + + +StableDiffusionImageVariationPipeline + +StableDiffusionImageVariationPipeline lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of Stable Diffusion model, trained by Justin Pinkney (@Buntworthy) at Lambda +The original codebase can be found here: +Stable Diffusion Image Variations +Available Checkpoints are: +sd-image-variations-diffusers: lambdalabs/sd-image-variations-diffusers + +class diffusers.StableDiffusionImageVariationPipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: CLIPVisionModelWithProjection +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. Stable Diffusion Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline to generate variations from an input image using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
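For orientation, a short sketch of calling the pipeline with the lambdalabs/sd-image-variations-diffusers checkpoint listed above; the input image path is a placeholder you would replace with your own image, and the guidance scale is an arbitrary example value. Copied
+import torch
+from diffusers import StableDiffusionImageVariationPipeline
+from diffusers.utils import load_image
+
+pipe = StableDiffusionImageVariationPipeline.from_pretrained(
+    "lambdalabs/sd-image-variations-diffusers", torch_dtype=torch.float16
+)
+pipe = pipe.to("cuda")
+
+# any RGB image can be used as conditioning; this path is a placeholder
+init_image = load_image("path/to/your/image.png")
+
+out = pipe(init_image, guidance_scale=3.0, num_images_per_prompt=2)
+out.images[0].save("variation.png")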
+ +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPFeatureExtractor + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
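The memory-related methods above can be combined. The sketch below (reusing the same image-variation checkpoint as an assumed example) enables attention slicing together with sequential CPU offload; note that enable_sequential_cpu_offload manages device placement itself, so the pipeline is not moved to CUDA manually. Copied
+import torch
+from diffusers import StableDiffusionImageVariationPipeline
+
+pipe = StableDiffusionImageVariationPipeline.from_pretrained(
+    "lambdalabs/sd-image-variations-diffusers", torch_dtype=torch.float16
+)
+
+# compute attention in slices to lower peak memory at a small speed cost
+pipe.enable_attention_slicing()
+
+# keep submodules on CPU and move each to the GPU only for its forward pass
+pipe.enable_sequential_cpu_offload()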
diff --git a/scrapped_outputs/b9401921b7e78820950377dfd105e78b.txt b/scrapped_outputs/b9401921b7e78820950377dfd105e78b.txt new file mode 100644 index 0000000000000000000000000000000000000000..651ea7735a84779102f99c628b73538a2c0f99d1 --- /dev/null +++ b/scrapped_outputs/b9401921b7e78820950377dfd105e78b.txt @@ -0,0 +1,98 @@ +DDPM + + +Overview + +Denoising Diffusion Probabilistic Models +(DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion based model of the same name, but in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. +The abstract of the paper is the following: +We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. +The original codebase of this paper can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_ddpm.py +Unconditional Image Generation +- + +DDPMPipeline + + +class diffusers.DDPMPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 1000 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. 
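A short usage sketch for the pipeline documented above; google/ddpm-cat-256 is assumed here as a representative unconditional DDPM checkpoint, and any other DDPM checkpoint id would work the same way. Copied
+from diffusers import DDPMPipeline
+
+pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
+pipe = pipe.to("cuda")
+
+# DDPM uses the full 1000-step ancestral sampling chain by default
+image = pipe(num_inference_steps=1000).images[0]
+image.save("ddpm_generated_image.png")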
diff --git a/scrapped_outputs/b9918c2569368648f154b4f821b20afc.txt b/scrapped_outputs/b9918c2569368648f154b4f821b20afc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d1a13e4b4a70e8e6d6bd0c0f8b80cc8885fcabb5 --- /dev/null +++ b/scrapped_outputs/b9918c2569368648f154b4f821b20afc.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes, or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timesteps and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timesteps. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timesteps * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step, and modifies the pipeline attributes and tensor variables for the next denoising step. With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all!
🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! Using Callbacks to interrupt the Diffusion Process The following Pipelines support interrupting the diffusion process via callback StableDiffusionPipeline StableDiffusionImg2ImgPipeline StableDiffusionInpaintPipeline StableDiffusionXLPipeline StableDiffusionXLImg2ImgPipeline StableDiffusionXLInpaintPipeline Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. This callback function should take the following arguments: pipe, i, t, and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) diff --git a/scrapped_outputs/b9ab7f55f745f87b838356ee7cbc9dbe.txt b/scrapped_outputs/b9ab7f55f745f87b838356ee7cbc9dbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef8f603ddfc814bb38cda19aa526350c7005dee9 --- /dev/null +++ b/scrapped_outputs/b9ab7f55f745f87b838356ee7cbc9dbe.txt @@ -0,0 +1,104 @@ +Textual Inversion Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for loading Textual Inversion embeddings from Diffusers and Automatic1111 into the text encoder and loading a special token to activate the embeddings. To learn more about how to load Textual Inversion embeddings, see the Textual Inversion loading guide. TextualInversionLoaderMixin class diffusers.loaders.TextualInversionLoaderMixin < source > ( ) Load Textual Inversion tokens and embeddings to the tokenizer and text encoder. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. 
If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, the function uses self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported).
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") maybe_convert_prompt < source > ( prompt: Union tokenizer: PreTrainedTokenizer ) → str or list of str Parameters prompt (str or list of str) — +The prompt or prompts to guide the image generation. tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. Returns +str or list of str + +The converted prompt + Processes prompts that include a special token corresponding to a multi-vector textual inversion embedding to +be replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or if the textual inversion token is a single vector, the input prompt is returned. 
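To make the multi-vector behaviour concrete, here is a small hypothetical illustration, continuing with the pipe object from the examples above; the "<cat-toy>" placeholder and the assumed number of vectors are for the sake of the example only. Copied
+# assume "<cat-toy>" was loaded as a multi-vector embedding with 3 vectors
+prompt = "A <cat-toy> backpack"
+converted = pipe.maybe_convert_prompt(prompt, pipe.tokenizer)
+# the special token is expanded so every learned vector is used,
+# e.g. "A <cat-toy> <cat-toy>_1 <cat-toy>_2 backpack"
+print(converted)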
unload_textual_inversion < source > ( tokens: Union = None tokenizer: Optional = None text_encoder: Optional = None ) Unload Textual Inversion embeddings from the text encoder of StableDiffusionPipeline Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") + +# Example 1 +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") + +# Remove all token embeddings +pipeline.unload_textual_inversion() + +# Example 2 +pipeline.load_textual_inversion("sd-concepts-library/moeb-style") +pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") + +# Remove just one token +pipeline.unload_textual_inversion("") + +# Example 3: unload from SDXL +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") +embedding_path = hf_hub_download(repo_id="linoyts/web_y2k", filename="web_y2k_emb.safetensors", repo_type="model") + +# load embeddings to the text encoders +state_dict = load_file(embedding_path) + +# load embeddings of text_encoder 1 (CLIP ViT-L/14) +pipeline.load_textual_inversion(state_dict["clip_l"], token=["", ""], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) +# load embeddings of text_encoder 2 (CLIP ViT-G/14) +pipeline.load_textual_inversion(state_dict["clip_g"], token=["", ""], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) + +# Unload explicitly from both text encoders abd tokenizers +pipeline.unload_textual_inversion(tokens=["", ""], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) +pipeline.unload_textual_inversion(tokens=["", ""], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) diff --git a/scrapped_outputs/b9c0ac69a6f7d9553856064552c97d52.txt b/scrapped_outputs/b9c0ac69a6f7d9553856064552c97d52.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/b9c0ac69a6f7d9553856064552c97d52.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. 
Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. 
Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... 
"cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. diff --git a/scrapped_outputs/b9d7765cfec8dbe5b2d366fa049cb017.txt b/scrapped_outputs/b9d7765cfec8dbe5b2d366fa049cb017.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/b9d8d271af6db7d7c1f8780e97f1c67f.txt b/scrapped_outputs/b9d8d271af6db7d7c1f8780e97f1c67f.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f39cdc6039e6ca60233a880a99b0f0bf5e50fc4 --- /dev/null +++ b/scrapped_outputs/b9d8d271af6db7d7c1f8780e97f1c67f.txt @@ -0,0 +1,53 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. 
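For context, a brief sketch of how this scheduler is typically driven through ConsistencyModelPipeline; the checkpoint id below is assumed to be one of the distilled ImageNet 64x64 checkpoints from the openai/consistency_models release, and the class label and timestep schedule are example values from that repository. Copied
+import torch
+from diffusers import ConsistencyModelPipeline
+
+pipe = ConsistencyModelPipeline.from_pretrained(
+    "openai/diffusers-cd_imagenet64_l2", torch_dtype=torch.float16
+)
+pipe.to("cuda")
+
+# onestep sampling
+image = pipe(num_inference_steps=1).images[0]
+
+# multistep, class-conditional sampling; 145 is an ImageNet class id and
+# the explicit timesteps follow the schedule used in the original repository
+image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0]
+image.save("consistency_model_sample.png")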
CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). 
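As a small illustration of the scheduler API on its own, the sketch below instantiates the scheduler with its defaults and inspects the discrete timesteps produced by set_timesteps; the printed values depend on the default 40-step training schedule. Copied
+from diffusers import CMStochasticIterativeScheduler
+
+scheduler = CMStochasticIterativeScheduler()
+scheduler.set_timesteps(num_inference_steps=2)
+
+# the discrete timesteps the sampling loop will iterate over
+print(scheduler.timesteps)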
sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/b9dd6769a06158cba4685a7ed3f1f169.txt b/scrapped_outputs/b9dd6769a06158cba4685a7ed3f1f169.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/b9dd6769a06158cba4685a7ed3f1f169.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. 
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 75) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 9.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple. callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. Returns StableDiffusionPipelineOutput or tuple If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images. The call function to the pipeline for generation.
Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — Override the default None operator for use as op argument to the memory_efficient_attention() function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speedup during inference. A speedup during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedence. Examples: Copied
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround: the Flash Attention op does not accept the VAE's attention shape, so use the default op for the VAE
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/ba22110439488c12d93938ce24e09344.txt b/scrapped_outputs/ba22110439488c12d93938ce24e09344.txt new file mode 100644 index 0000000000000000000000000000000000000000..738762efdae5ec5169d56eff6d3638f353b0769a --- /dev/null +++ b/scrapped_outputs/ba22110439488c12d93938ce24e09344.txt @@ -0,0 +1,409 @@
Memory and speed
We present some techniques and ideas to optimize 🤗 Diffusers inference for memory or speed. As a general rule, we recommend the use of xFormers for memory efficient attention; please see the recommended installation instructions.
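In most environments the installation amounts to a single command (this assumes a recent PyTorch build with matching CUDA; exact wheel availability varies by platform): Copied
pip install xformers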
We’ll discuss how the following settings impact performance and memory.

|  | Latency | Speedup |
| --- | --- | --- |
| original | 9.50s | x1 |
| cuDNN auto-tuner | 9.37s | x1.01 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
| memory efficient attention | 2.63s | x3.61 |

Results obtained on NVIDIA TITAN RTX by generating a single image of size 512x512 from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps.

Enable cuDNN auto-tuner

NVIDIA cuDNN supports many algorithms to compute a convolution. The autotuner runs a short benchmark and selects the kernel with the best performance on the given hardware for a given input size. Since we’re using convolutional networks (other types are currently not supported), we can enable the cuDNN autotuner before launching inference by setting:

Copied
import torch

torch.backends.cudnn.benchmark = True

Use tf32 instead of fp32 (on Ampere and later CUDA devices)

On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too. It can significantly speed up computations with typically negligible loss of numerical accuracy. You can read more about it here. All you need to do is add this before your inference:

Copied
import torch

torch.backends.cuda.matmul.allow_tf32 = True

Half precision weights

To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. This involves loading the float16 version of the weights, which was saved to a branch named fp16, and telling PyTorch to use the float16 type when loading them:

Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

It is strongly discouraged to make use of torch.autocast (https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure float16 precision.

Sliced attention for additional memory savings

For even additional memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once. Attention slicing is useful even if a batch size of just 1 is used, as long as the model uses more than one attention head. If there is more than one attention head, the QK^T attention matrix can be computed sequentially for each head, which can save a significant amount of memory. To perform the attention computation sequentially over each head, you only need to invoke enable_attention_slicing() in your pipeline before inference, like here:

Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]

There’s a small performance penalty of about 10% slower inference times, but this method allows you to use Stable Diffusion in as little as 3.2 GB of VRAM!
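If you later want to switch back to full attention, slicing can be turned off again; a minimal sketch: Copied
pipe.disable_attention_slicing()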
+ +Sliced VAE decode for larger batches + +To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE decode that decodes the batch latents one image at a time. +You likely want to couple this with enable_attention_slicing() or enable_xformers_memory_efficient_attention() to further minimize memory use. +To perform the VAE decode one image at a time, invoke enable_vae_slicing() in your pipeline before inference. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +images = pipe([prompt] * 32).images +You may see a small performance boost in VAE decode on multi-image batches. There should be no performance impact on single-image batches. + +Tiled VAE decode and encode for large images + +Tiled VAE processing makes it possible to work with large images on limited VRAM. For example, generating 4k images in 8GB of VRAM. Tiled VAE decoder splits the image into overlapping tiles, decodes the tiles, and blends the outputs to make the final image. +You want to couple this with enable_attention_slicing() or enable_xformers_memory_efficient_attention() to further minimize memory use. +To use tiled VAE processing, invoke enable_vae_tiling() in your pipeline before inference. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] +The output image will have some tile-to-tile tone variation from the tiles having separate decoders, but you shouldn’t see sharp seams between the tiles. The tiling is turned off for images that are 512x512 or smaller. + + +Offloading to CPU with accelerate for memory savings + +For additional memory savings, you can offload the weights to CPU and only load them to GPU when performing the forward pass. +To perform CPU offloading, all you have to do is invoke enable_sequential_cpu_offload(): + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] +And you can get the memory consumption to < 3GB. +Note that this method works at the submodule level, not on whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different submodules of the UNet are sequentially onloaded and then offloaded as they are needed, so the number of memory transfers is large. +Consider using model offloading as another point in the optimization space: it will be much faster, but memory savings won't be as large. 
It is also possible to chain offloading with attention slicing for minimal memory consumption (< 2GB).

Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_sequential_cpu_offload()
pipe.enable_attention_slicing(1)

image = pipe(prompt).images[0]

Note: When using enable_sequential_cpu_offload(), it is important not to move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal. See this issue for more information.

Model offloading for fast inference and memory savings

Sequential CPU offloading, as discussed in the previous section, preserves a lot of memory but makes inference slower, because submodules are moved to GPU as needed, and immediately returned to CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent modules. This results in a negligible impact on inference time (compared with moving the pipeline to cuda), while still providing some memory savings. In this scenario, only one of the main components of the pipeline (typically: text encoder, unet and vae) will be on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations will stay on GPU until they are no longer needed. This feature can be enabled by invoking enable_model_cpu_offload() on the pipeline, as shown below.

Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
image = pipe(prompt).images[0]

This is also compatible with attention slicing for additional memory savings.

Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_model_cpu_offload()
pipe.enable_attention_slicing(1)

image = pipe(prompt).images[0]

This feature requires `accelerate` version 0.17.0 or higher.

Using Channels Last memory format

Channels last memory format is an alternative way of ordering NCHW tensors in memory that preserves the dimension ordering. Channels last tensors are ordered in such a way that channels become the densest dimension (aka storing images pixel-per-pixel). Since not all operators currently support the channels last format, it may result in worse performance, so it’s better to try it and see if it works for your model.
+For example, in order to set the UNet model in our pipeline to use channels last format, we can use the following: + + + Copied +print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works + +Tracing + +Tracing runs an example input tensor through your model, and captures the operations that are invoked as that input makes its way through the model’s layers so that an executable or ScriptFunction is returned that will be optimized using just-in-time compilation. +To trace our UNet model, we can use the following: + + + Copied +import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn(2, 4, 64, 64).half().cuda() + timestep = torch.rand(1).half().cuda() * 999 + encoder_hidden_states = torch.randn(2, 77, 768).half().cuda() + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") +Then we can replace the unet attribute of the pipeline with the traced model like the following + + + Copied +from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, 
num_inference_steps=50).images[0]

Memory Efficient Attention

Recent work on optimizing the bandwidth in the attention block has generated huge speedups and gains in GPU memory usage. The most recent is Flash Attention from @tridao: code, paper. Here are the speedups we obtain on a few Nvidia GPUs when running inference at 512x512 with a batch size of 1 (one prompt):

| GPU | Base Attention FP16 | Memory Efficient Attention FP16 |
| --- | --- | --- |
| NVIDIA Tesla T4 | 3.5it/s | 5.5it/s |
| NVIDIA 3060 RTX | 4.6it/s | 7.8it/s |
| NVIDIA A10G | 8.88it/s | 15.6it/s |
| NVIDIA RTX A6000 | 11.7it/s | 21.09it/s |
| NVIDIA TITAN RTX | 12.51it/s | 18.22it/s |
| A100-SXM4-40GB | 18.6it/s | 29.it/s |
| A100-SXM-80GB | 18.7it/s | 29.5it/s |

To leverage it, just make sure you have: PyTorch > 1.12, CUDA available, and the xformers library installed.

Copied
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

pipe.enable_xformers_memory_efficient_attention()

with torch.inference_mode():
    sample = pipe("a small cat")

# optional: You can disable it via
# pipe.disable_xformers_memory_efficient_attention()
diff --git a/scrapped_outputs/ba40af8a5489b5e2d78fc3ef2b7f1af8.txt b/scrapped_outputs/ba40af8a5489b5e2d78fc3ef2b7f1af8.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c97c654c1ee35cd7df313c6a56cd1af6d619611 --- /dev/null +++ b/scrapped_outputs/ba40af8a5489b5e2d78fc3ef2b7f1af8.txt @@ -0,0 +1,78 @@
Pipeline callbacks The denoising loop of a pipeline can be modified with custom-defined functions using the callback_on_step_end parameter. The callback function is executed at the end of each step, and modifies the pipeline attributes and variables for the next step. This is really useful for dynamically adjusting certain pipeline attributes or modifying tensor variables. This versatility allows for interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. With callbacks, you can implement new features without modifying the underlying code! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! This guide will demonstrate how callbacks work by walking through a few features you can implement with them. Dynamic classifier-free guidance Dynamic classifier-free guidance (CFG) is a feature that allows you to disable CFG after a certain number of inference steps, which can help you save compute with minimal cost to performance. The callback function for this should have the following arguments: pipeline (or the pipeline instance) provides access to important properties such as num_timesteps and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipeline._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timesteps. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method.
Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. Your callback function should look something like this: Copied
def callback_dynamic_cfg(pipeline, step_index, timestep, callback_kwargs):
    # adjust the batch_size of prompt_embeds according to guidance_scale
    if step_index == int(pipeline.num_timesteps * 0.4):
        prompt_embeds = callback_kwargs["prompt_embeds"]
        prompt_embeds = prompt_embeds.chunk(2)[-1]

        # update guidance_scale and prompt_embeds
        pipeline._guidance_scale = 0.0
        callback_kwargs["prompt_embeds"] = prompt_embeds
    return callback_kwargs

Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

generator = torch.Generator(device="cuda").manual_seed(1)
out = pipeline(
    prompt,
    generator=generator,
    callback_on_step_end=callback_dynamic_cfg,
    callback_on_step_end_tensor_inputs=['prompt_embeds']
)

out.images[0].save("out_custom_cfg.png")

Interrupt the diffusion process The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. Stopping the diffusion process early is useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. This callback function should take the following arguments: pipeline, i, t, and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.enable_model_cpu_offload()
num_inference_steps = 50

def interrupt_callback(pipeline, i, t, callback_kwargs):
    stop_idx = 10
    if i == stop_idx:
        pipeline._interrupt = True

    return callback_kwargs

pipeline(
    "A photo of a cat",
    num_inference_steps=num_inference_steps,
    callback_on_step_end=interrupt_callback,
)

Display image after each generation step This tip was contributed by asomoza. Display an image after each generation step by accessing and converting the latents after each step into an image. The latent space is compressed to 128x128, so the images are also 128x128, which is useful for a quick preview. Use the function below to convert the SDXL latents (4 channels) to RGB tensors (3 channels) as explained in the Explaining the SDXL latent space blog post.
Copied
def latents_to_rgb(latents):
    weights = (
        (60, -60, 25, -70),
        (60, -5, 15, -50),
        (60, 10, -5, -35)
    )

    weights_tensor = torch.t(torch.tensor(weights, dtype=latents.dtype).to(latents.device))
    biases_tensor = torch.tensor((150, 140, 130), dtype=latents.dtype).to(latents.device)
    rgb_tensor = torch.einsum("...lxy,lr -> ...rxy", latents, weights_tensor) + biases_tensor.unsqueeze(-1).unsqueeze(-1)
    image_array = rgb_tensor.clamp(0, 255)[0].byte().cpu().numpy()
    image_array = image_array.transpose(1, 2, 0)

    return Image.fromarray(image_array)

Create a function to decode and save the latents into an image. Copied
def decode_tensors(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]

    image = latents_to_rgb(latents)
    image.save(f"{step}.png")

    return callback_kwargs

Pass the decode_tensors function to the callback_on_step_end parameter to decode the tensors after each step. You also need to specify what you want to modify in the callback_on_step_end_tensor_inputs parameter, which in this case are the latents. Copied
from diffusers import AutoPipelineForText2Image
import torch
from PIL import Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True
).to("cuda")

image = pipeline(
    prompt="A croissant shaped like a cute bear.",
    negative_prompt="Deformed, ugly, bad anatomy",
    callback_on_step_end=decode_tensors,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]

(Latent previews at steps 0, 19, 29, 39, and 49.)
diff --git a/scrapped_outputs/bab1838903a4728cbe8014b64b137b7f.txt b/scrapped_outputs/bab1838903a4728cbe8014b64b137b7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/bab1838903a4728cbe8014b64b137b7f.txt @@ -0,0 +1,124 @@
PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning.
As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such, it has a similar architecture to DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. First, install the bitsandbytes library: Copied
pip install -U bitsandbytes

Then load the text encoder in 8-bit: Copied
from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline
import torch

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",
)
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="auto"
)

Now, use the pipe to encode a prompt: Copied
with torch.no_grad():
    prompt = "cute cat"
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)

Since text embeddings have been computed, remove the text_encoder and pipe from the memory, and free up some GPU VRAM: Copied
import gc

def flush():
    gc.collect()
    torch.cuda.empty_cache()

del text_encoder
del pipe
flush()

Then compute the latents with the prompt embeddings as inputs: Copied
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS",
    text_encoder=None,
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe(
    negative_prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_images_per_prompt=1,
    output_type="latent",
).images

del pipe.transformer
flush()

Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded.
Once the latents are computed, pass them off to the VAE to decode into a real image: Copied
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
image = pipe.image_processor.postprocess(image, output_type="pil")[0]
image.save("cat.png")

By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — Frozen text-encoder. PixArt-Alpha uses T5, specifically the t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — Tokenizer of class T5Tokenizer. transformer (Transformer2DModel) — A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_inference_steps (int, optional, defaults to 20) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. timesteps (List[int], optional) — Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2
of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument. prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for negative text embeddings. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step. clean_caption (bool, optional, defaults to True) — Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to be installed. If the dependencies are not installed, the embeddings will be created from the raw prompt. use_resolution_binning (bool, defaults to True) — If set to True, the requested height and width are first mapped to the closest resolutions using ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to the requested resolution. Useful for generating non-square images. Returns ImagePipelineOutput or tuple If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images. Function invoked when calling the pipeline for generation.
Examples: Copied
>>> import torch
>>> from diffusers import PixArtAlphaPipeline

>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too.
>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16)
>>> # Enable memory optimizations.
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A small cactus with a happy face in the Sahara desert."
>>> image = pipe(prompt).images[0]

classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — prompt to be encoded negative_prompt (str or List[str], optional) — The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For PixArt-Alpha, this should be "". do_classifier_free_guidance (bool, optional, defaults to True) — whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — number of images that should be generated per prompt device — (torch.device, optional): torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. For PixArt-Alpha, it should be the embeddings of the "" string. clean_caption (bool, defaults to False) — If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/babf807c14ed0ccaad6671bf64b90a20.txt b/scrapped_outputs/babf807c14ed0ccaad6671bf64b90a20.txt new file mode 100644 index 0000000000000000000000000000000000000000..4760b4efe8b8dbb32af0b2b6a3453f7f5646bcf8 --- /dev/null +++ b/scrapped_outputs/babf807c14ed0ccaad6671bf64b90a20.txt @@ -0,0 +1,15 @@
Score SDE VE Score-Based Generative Modeling through Stochastic Differential Equations (Score SDE) is by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. This pipeline implements the variance exploding (VE) variant of the stochastic differential equation method. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples.
We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. The original codebase can be found at yang-song/score_sde_pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ScoreSdeVePipeline class diffusers.ScoreSdeVePipeline < source > ( unet: UNet2DModel scheduler: ScoreSdeVeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (ScoreSdeVeScheduler) — +A ScoreSdeVeScheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 2000 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
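A quick usage sketch (the checkpoint name below is an assumption; substitute any Score SDE VE / NCSN++ checkpoint): Copied
from diffusers import ScoreSdeVePipeline

# Assumed checkpoint; any Score SDE VE (NCSN++) checkpoint should work here.
pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-celebahq-256")
pipe = pipe.to("cuda")

# Unconditional generation; 2000 steps is the default and is slow,
# fewer steps trade sample quality for speed.
image = pipe(num_inference_steps=2000).images[0]
image.save("sde_ve_generated_image.png")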
diff --git a/scrapped_outputs/bac77a2102fb2142331fc1a92b4fd227.txt b/scrapped_outputs/bac77a2102fb2142331fc1a92b4fd227.txt new file mode 100644 index 0000000000000000000000000000000000000000..11ac9a3410a83a30e6fc980490a3dfac0dbf0c58 --- /dev/null +++ b/scrapped_outputs/bac77a2102fb2142331fc1a92b4fd227.txt @@ -0,0 +1,219 @@ +AltDiffusion AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu. The abstract from the paper is: In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we altered its text encoder with a pre-trained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k-CN, COCO-CN and XTD. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. Our models and code are available at this https URL. Tips AltDiffusion is conceptually the same as Stable Diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AltDiffusionPipeline class diffusers.AltDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (XLMRobertaTokenizer) — +A XLMRobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Alt Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 timesteps: typing.List[int] = None guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None guidance_rescale: float = 0.0 clip_skip: typing.Optional[int] = None callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], NoneType] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.AltDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AltDiffusionPipeline + +>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" +>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. 
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Alt Diffusion v1, v2, and Alt Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
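As a short, hedged sketch of how the switches documented above can be combined on an instantiated pipeline (the FreeU factors here are illustrative placeholders, not values tuned for Alt Diffusion): Copied
import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16).to("cuda")

# Trade a little speed for a lower peak memory footprint when decoding latents.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# Illustrative FreeU scaling factors; see the FreeU repository for values known to work well.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]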
fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. AltDiffusionImg2ImgPipeline class diffusers.AltDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (XLMRobertaTokenizer) — +A XLMRobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Alt Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor]] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 timesteps: typing.List[int] = None guidance_scale: typing.Optional[float] = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.FloatTensor], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None clip_skip: int = None callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], NoneType] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.AltDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import AltDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "BAAI/AltDiffusion-m9" +>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> # "A fantasy landscape, trending on artstation" +>>> prompt = "幻想风景, artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("幻想风景.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Alt Diffusion v1, v2, and Alt Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. AltDiffusionPipelineOutput class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Alt Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/bada3171a0a46b7c2e71d82878161cdc.txt b/scrapped_outputs/bada3171a0a46b7c2e71d82878161cdc.txt new file mode 100644 index 0000000000000000000000000000000000000000..173b882d6bb0b0500124b1e8f97633b6bc0e5c16 --- /dev/null +++ b/scrapped_outputs/bada3171a0a46b7c2e71d82878161cdc.txt @@ -0,0 +1,62 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. 
You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate our own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA!
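One immediate next step is experimenting with how strongly the trained LoRA weights influence generation at inference time. A hedged sketch (not part of the original guide) that scales the LoRA contribution through the cross_attention_kwargs argument: Copied
# Assumes the `pipeline` from the inference snippet above, with the LoRA weights already loaded.
# scale=1.0 applies the LoRA fully; smaller values blend back toward the base model.
image = pipeline(
    "A pokemon with blue eyes",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
image.save("pokemon_lora_scaled.png")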
To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/bb7206fb1d69d4626d6f4f9b303e5a2c.txt b/scrapped_outputs/bb7206fb1d69d4626d6f4f9b303e5a2c.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a289245a8ad76d1c08b10f1992b7f746fc18cd3 --- /dev/null +++ b/scrapped_outputs/bb7206fb1d69d4626d6f4f9b303e5a2c.txt @@ -0,0 +1,16 @@ +Stochastic Karras VE Elucidating the Design Space of Diffusion-Based Generative Models is by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. This pipeline implements the stochastic sampling tailored to variance expanding (VE) models. The abstract from the paper: We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KarrasVePipeline class diffusers.KarrasVePipeline < source > ( unet: UNet2DModel scheduler: KarrasVeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (KarrasVeScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 50 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Example: ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/bb89e0c97aa285aae6ff65565580b3ee.txt b/scrapped_outputs/bb89e0c97aa285aae6ff65565580b3ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..7a2bec5ba9618f6269207b92cee2e6ce506f4b8d --- /dev/null +++ b/scrapped_outputs/bb89e0c97aa285aae6ff65565580b3ee.txt @@ -0,0 +1,244 @@ +Text-to-image + +The text-to-image fine-tuning script is experimental. It’s easy to overfit and run into issues like catastrophic forgetting. We recommend you explore different hyperparameters to get the best results on your dataset. +Text-to-image models like Stable Diffusion generate an image from a text prompt. This guide will show you how to finetune the CompVis/stable-diffusion-v1-4 model on your own dataset with PyTorch and Flax. All the training scripts for text-to-image finetuning used in this guide can be found in this repository if you’re interested in taking a closer look. +Before running the scripts, make sure to install the library’s training dependencies: + + + Copied +pip install git+https://github.com/huggingface/diffusers.git +pip install -U -r requirements.txt +And initialize an 🤗 Accelerate environment with: + + + Copied +accelerate config +If you have already cloned the repo, then you won’t need to go through these steps. Instead, you can pass the path to your local checkout to the training script and it will be loaded from there. + +Hardware requirements + +Using gradient_checkpointing and mixed_precision, it should be possible to finetune the model on a single 24GB GPU. For higher batch_size’s and faster training, it’s better to use GPUs with more than 30GB of GPU memory. You can also use JAX/Flax for fine-tuning on TPUs or GPUs, which will be covered below. +You can reduce your memory footprint even more by enabling memory efficient attention with xFormers. Make sure you have xFormers installed and pass the --enable_xformers_memory_efficient_attention flag to the training script. +xFormers is not available for Flax. + +Upload model to Hub + +Store your model on the Hub by adding the following argument to the training script: + + + Copied + --push_to_hub + +Save and load checkpoints + +It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script: + + + Copied + --checkpointing_steps=500 +Every 500 steps, the full training state is saved in a subfolder in the output_dir. The checkpoint has the format checkpoint- followed by the number of steps trained so far. For example, checkpoint-1500 is a checkpoint saved after 1500 training steps. +To load a checkpoint to resume training, pass the argument --resume_from_checkpoint to the training script and specify the checkpoint you want to resume from. For example, the following argument resumes training from the checkpoint saved after 1500 training steps: + + + Copied + --resume_from_checkpoint="checkpoint-1500" + +Fine-tuning + + + +Pytorch + +Hide Pytorch content + +Launch the PyTorch training script for a fine-tuning run on the Pokémon BLIP captions dataset like this. 
+Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" + +To finetune on your own dataset, prepare the dataset according to the format required by 🤗 Datasets. You can upload your dataset to the Hub, or you can prepare a local folder with your files. + +Modify the script if you want to use custom loading logic. We left pointers in the code in the appropriate places to help you. 🤗 The example script below shows how to finetune on a local dataset in TRAIN_DIR and where to save the model to in OUTPUT_DIR: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export TRAIN_DIR="path_to_your_dataset" +export OUTPUT_DIR="path_to_save_model" + +accelerate launch train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$TRAIN_DIR \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} + +Training with multiple GPUs + +accelerate allows for seamless multi-GPU training. Follow the instructions here +for running distributed training with accelerate. Here is an example command: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export dataset_name="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --use_ema \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --lr_scheduler="constant" --lr_warmup_steps=0 \ + --output_dir="sd-pokemon-model" + +JAX + +Hide JAX content + +With Flax, it’s possible to train a Stable Diffusion model faster on TPUs and GPUs thanks to @duongna211. This is very efficient on TPU hardware but works great on GPUs too. The Flax training script doesn’t support features like gradient checkpointing or gradient accumulation yet, so you’ll need a GPU with at least 30GB of memory or a TPU v3.
+Before running the script, make sure you have the requirements installed: + + + Copied +pip install -U -r requirements_flax.txt +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. +Now you can launch the Flax training script like this: + + + Copied +export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export dataset_name="lambdalabs/pokemon-blip-captions" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$dataset_name \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" +To finetune on your own dataset, prepare the dataset according to the format required by 🤗 Datasets. You can upload your dataset to the Hub, or you can prepare a local folder with your files. +Modify the script if you want to use custom loading logic. We left pointers in the code in the appropriate places to help you. 🤗 The example script below shows how to finetune on a local dataset in TRAIN_DIR: + + + Copied +export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export TRAIN_DIR="path_to_your_dataset" + +python train_text_to_image_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$TRAIN_DIR \ + --resolution=512 --center_crop --random_flip \ + --train_batch_size=1 \ + --mixed_precision="fp16" \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --output_dir="sd-pokemon-model" + + +Training with Min-SNR weighting + +We support training with the Min-SNR weighting strategy proposed in Efficient Diffusion Training via Min-SNR Weighting Strategy which helps to achieve faster convergence +by rebalancing the loss. In order to use it, one needs to set the --snr_gamma argument. The recommended +value when using it is 5.0. +You can find this project on Weights and Biases that compares the loss surfaces of the following setups: +Training without the Min-SNR weighting strategy +Training with the Min-SNR weighting strategy (snr_gamma set to 5.0) +Training with the Min-SNR weighting strategy (snr_gamma set to 1.0) +For our small Pokemons dataset, the effects of Min-SNR weighting strategy might not appear to be pronounced, but for larger datasets, we believe the effects will be more pronounced. +Also, note that in this example, we either predict epsilon (i.e., the noise) or the v_prediction. For both of these cases, the formulation of the Min-SNR weighting strategy that we have used holds. +Training with Min-SNR weighting strategy is only supported in PyTorch. + +LoRA + +You can also use Low-Rank Adaptation of Large Language Models (LoRA), a fine-tuning technique for accelerating training large models, for fine-tuning text-to-image models. For more details, take a look at the LoRA training guide. 
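Circling back to the Min-SNR weighting strategy described above: the script exposes it through --snr_gamma, and the core idea is to cap each timestep's loss weight at min(SNR, γ). A rough, hedged Python sketch of that weighting for epsilon prediction (illustrative only, not the training script's exact code): Copied
import torch

def min_snr_weights(alphas_cumprod: torch.Tensor, timesteps: torch.Tensor, snr_gamma: float = 5.0) -> torch.Tensor:
    # Per-timestep signal-to-noise ratio: SNR(t) = alpha_bar_t / (1 - alpha_bar_t).
    alpha_bar = alphas_cumprod[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # Clamp the SNR at gamma, then divide by the SNR so that easy (high-SNR)
    # timesteps no longer dominate the loss.
    return torch.clamp(snr, max=snr_gamma) / snr

# The per-sample MSE loss would then be multiplied by these weights before averaging.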
+ +Inference + +Now you can load the fine-tuned model for inference by passing the model path or model name on the Hub to the StableDiffusionPipeline: + + +Pytorch + +Hide Pytorch content + + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +model_path = "path_to_saved_model" +pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(prompt="yoda").images[0] +image.save("yoda-pokemon.png") + +JAX + +Hide JAX content + + + + Copied +import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +model_path = "path_to_saved_model" +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) + +prompt = "yoda pokemon" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +images[0].save("yoda-pokemon.png")
DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. 
+ Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/bba130b23190ad8bc8bed04372ecf587.txt b/scrapped_outputs/bba130b23190ad8bc8bed04372ecf587.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/bba130b23190ad8bc8bed04372ecf587.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. 
In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. 
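To eyeball the outputs side by side before handing them to human evaluators, the generated images can be tiled into a single grid. A small sketch using the make_image_grid utility from diffusers.utils (assuming it is available in your installed version; images here is the list of PIL images returned above): Copied
from diffusers.utils import make_image_grid

# One row with one column per sample prompt; returns a single PIL image.
grid = make_image_grid(images, rows=1, cols=len(images))
grid.save("parti_prompt_samples.png")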
Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. 
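The snippets that follow also reuse a device and a weight_dtype variable without redefining them. If you are running the code as-is, a minimal setup (our assumption, adjust to your hardware) could be:
import torch

# Reused by the pipelines and encoders in the rest of this document.
device = "cuda" if torch.cuda.is_available() else "cpu"
weight_dtype = torch.float16 if device == "cuda" else torch.float32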
First, we generate images with a +fixed seed using the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be much higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated with an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the “CLIP directional similarity”. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2.
WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. 
Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. 
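As a concrete sketch of the image-image similarity idea mentioned above, we can reuse the encoders wrapped by dir_similarity to check how much of the original content survives the edit (higher is better):
content_scores = []

for original, edited in zip(input_images, edited_images):
    # Reuse the CLIP image encoder wrapped by `dir_similarity`.
    feat_original = dir_similarity.encode_image(original)
    feat_edited = dir_similarity.encode_image(edited)
    content_scores.append(float(F.cosine_similarity(feat_edited, feat_original).detach().cpu()))

print(f"CLIP image-image similarity: {np.mean(content_scores)}")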
Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. 
Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/bba4e7f789d722a69b6301c73bffc326.txt b/scrapped_outputs/bba4e7f789d722a69b6301c73bffc326.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/bba4e7f789d722a69b6301c73bffc326.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. 
Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. 
Output of VQModel encoding method. diff --git a/scrapped_outputs/bba60e2da29ca3b1f1ad9f2857230df7.txt b/scrapped_outputs/bba60e2da29ca3b1f1ad9f2857230df7.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/bba60e2da29ca3b1f1ad9f2857230df7.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. 
If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/bbc210f0a6e0fda987b298d38311c56c.txt b/scrapped_outputs/bbc210f0a6e0fda987b298d38311c56c.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/bbc210f0a6e0fda987b298d38311c56c.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. It only generates images that resemble its training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. 
All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. 
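For orientation, here is a heavily simplified sketch of one optimization step from that loop, assuming a train_dataloader that yields batches with an "input" key (as the script's image transform produces) and omitting the Accelerate wrapping, EMA, and learning-rate scheduling:
import torch
import torch.nn.functional as F

model.train()
for batch in train_dataloader:
    clean_images = batch["input"]  # (batch, 3, resolution, resolution), normalized to [-1, 1]
    noise = torch.randn_like(clean_images)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],), device=clean_images.device
    ).long()

    # Forward diffusion: corrupt the clean images at the sampled timesteps.
    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

    # Predict the noise residual (assuming the default "epsilon" prediction type) and regress it.
    noise_pred = model(noisy_images, timesteps).sample
    loss = F.mse_loss(noise_pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()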
Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/bbc50a1e3710b583fb685921e790b1d6.txt b/scrapped_outputs/bbc50a1e3710b583fb685921e790b1d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/bbc50a1e3710b583fb685921e790b1d6.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/bbca6c264a29d0aef66cacb23e299b60.txt b/scrapped_outputs/bbca6c264a29d0aef66cacb23e299b60.txt new file mode 100644 index 0000000000000000000000000000000000000000..0f46c0cfb05d4b31a44bcbb7e006dad028814545 --- /dev/null +++ b/scrapped_outputs/bbca6c264a29d0aef66cacb23e299b60.txt @@ -0,0 +1,53 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide, and you can see how to use it in the usage guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union image_encoder_folder: Optional = 'image_encoder' **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or List[str] or os.PathLike or List[os.PathLike] or dict or List[dict]) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + subfolder (str or List[str]) — +The subfolder location of a model file within a larger model repository on the Hub or locally. +If a list is passed, it should have the same length as weight_name. weight_name (str or List[str]) — +The name of the weight file to load. If a list is passed, it should have the same length as +weight_name. image_encoder_folder (str, optional, defaults to image_encoder) — +The subfolder location of the image encoder within a larger model repository on the Hub or locally. +Pass None to not load the image encoder. If the image encoder is located in a folder inside subfolder, +you only need to pass the name of the folder that contains image encoder weights, e.g. image_encoder_folder="image_encoder". 
+If the image encoder is located in a folder other than subfolder, you should pass the path to the folder that contains image encoder weights, +for example, image_encoder_folder="different_subfolder/image_encoder". cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. set_ip_adapter_scale < source > ( scale ) Sets the conditioning scale between text and image. Example: Copied pipeline.set_ip_adapter_scale(0.5) unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... IPAdapterMaskProcessor class diffusers.image_processor.IPAdapterMaskProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = False do_binarize: bool = True do_convert_grayscale: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to False) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to True) — +Whether to binarize the image to 0/1. do_convert_grayscale (bool, optional, defaults to be True) — +Whether to convert the images to grayscale format. Image processor for IP Adapter image masks. downsample < source > ( mask: FloatTensor batch_size: int num_queries: int value_embed_dim: int ) → torch.FloatTensor Parameters mask (torch.FloatTensor) — +The input mask tensor generated with IPAdapterMaskProcessor.preprocess(). 
batch_size (int) — +The batch size. num_queries (int) — +The number of queries. value_embed_dim (int) — +The dimensionality of the value embeddings. Returns +torch.FloatTensor + +The downsampled mask tensor. + Downsamples the provided mask tensor to match the expected dimensions for scaled dot-product attention. +If the aspect ratio of the mask does not match the aspect ratio of the output image, a warning is issued. diff --git a/scrapped_outputs/bbca80b6496733c98f8c3c684b7f3edf.txt b/scrapped_outputs/bbca80b6496733c98f8c3c684b7f3edf.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/bbca80b6496733c98f8c3c684b7f3edf.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/bbf7d6d6795b05c6199a146d2f85f8ff.txt b/scrapped_outputs/bbf7d6d6795b05c6199a146d2f85f8ff.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/bbf7d6d6795b05c6199a146d2f85f8ff.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. 
For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally be fine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned on a custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used to edit both synthetic and real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models.
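To make the inference-only nature of InstructPix2Pix concrete, here is a minimal, hedged sketch of running the pipeline on a local image (the edit instruction and file paths are placeholders you will need to adapt):
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")  # placeholder path
edited_image = pipe("make the sky look like a sunset", image=init_image).images[0]
edited_image.save("edited.png")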
Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. 
It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/bc1e518ce56edc0398f118f10d939b02.txt b/scrapped_outputs/bc1e518ce56edc0398f118f10d939b02.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/bc1e518ce56edc0398f118f10d939b02.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/bc209f28d4fe1091ba934ad82c0a6a4e.txt b/scrapped_outputs/bc209f28d4fe1091ba934ad82c0a6a4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/bc2627d86c8f827a974cb7d049099627.txt b/scrapped_outputs/bc2627d86c8f827a974cb7d049099627.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/bc2627d86c8f827a974cb7d049099627.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer. For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/bc35f759d739905317ac39da84000a41.txt b/scrapped_outputs/bc35f759d739905317ac39da84000a41.txt new file mode 100644 index 0000000000000000000000000000000000000000..f75b37d0697a81c66f25162d0911b66f223d8162 --- /dev/null +++ b/scrapped_outputs/bc35f759d739905317ac39da84000a41.txt @@ -0,0 +1,106 @@ +Audio Diffusion Audio Diffusion is by Robert Dargavel Smith, and it leverages the recent advances in image generation from diffusion models by converting audio samples to and from Mel spectrogram images. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioDiffusionPipeline class diffusers.AudioDiffusionPipeline < source > ( vqvae: AutoencoderKL unet: UNet2DConditionModel mel: Mel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler] ) Parameters vqae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. mel (Mel) — +Transform audio into a spectrogram. 
scheduler (DDIMScheduler or DDPMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler or DDPMScheduler. Pipeline for audio diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 audio_file: str = None raw_audio: ndarray = None slice: int = 0 start_step: int = 0 steps: int = None generator: Generator = None mask_start_secs: float = 0 mask_end_secs: float = 0 step_generator: Generator = None eta: float = 0 noise: Tensor = None encoding: Tensor = None return_dict = True ) → List[PIL Image] Parameters batch_size (int) — +Number of samples to generate. audio_file (str) — +An audio file that must be on disk due to a Librosa limitation. raw_audio (np.ndarray) — +The raw audio file as a NumPy array. slice (int) — +Slice number of audio to convert. start_step (int) — +Step to start diffusion from. steps (int) — +Number of denoising steps (defaults to 50 for DDIM and 1000 for DDPM). generator (torch.Generator) — +A torch.Generator to make +generation deterministic. mask_start_secs (float) — +Number of seconds of audio to mask (not generate) at start. mask_end_secs (float) — +Number of seconds of audio to mask (not generate) at end. step_generator (torch.Generator) — +A torch.Generator used to denoise; defaults to None. eta (float) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. noise (torch.Tensor) — +A noise tensor of shape (batch_size, 1, height, width) or None. encoding (torch.Tensor) — +A tensor for UNet2DConditionModel of shape (batch_size, seq_length, cross_attention_dim). return_dict (bool) — +Whether or not to return an AudioPipelineOutput, ImagePipelineOutput or a plain tuple. Returns +List[PIL Image] + +A list of Mel spectrograms (float, List[np.ndarray]) with the sample rate and raw audio. + The call function to the pipeline for generation. Examples: For audio diffusion: Copied import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) For latent audio diffusion: Copied import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) For other tasks like variation, inpainting, outpainting, etc: Copied output = pipe( + raw_audio=output.audios[0, 0], + start_step=int(pipe.get_default_steps() / 2), + mask_start_secs=1, + mask_end_secs=1, +) +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) encode < source > ( images: typing.List[PIL.Image.Image] steps: int = 50 ) → np.ndarray Parameters images (List[PIL Image]) — +List of images to encode. steps (int) — +Number of encoding steps to perform (defaults to 50). Returns +np.ndarray + +A noise tensor of shape (batch_size, 1, height, width).
+ Reverse the denoising step process to recover a noisy image from the generated image. get_default_steps < source > ( ) → int Returns +int + +The number of steps. + Returns default number of steps recommended for inference. slerp < source > ( x0: Tensor x1: Tensor alpha: float ) → torch.Tensor Parameters x0 (torch.Tensor) — +The first tensor to interpolate between. x1 (torch.Tensor) — +Second tensor to interpolate between. alpha (float) — +Interpolation between 0 and 1 Returns +torch.Tensor + +The interpolated tensor. + Spherical Linear intERPolation. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. Mel class diffusers.Mel < source > ( x_res: int = 256 y_res: int = 256 sample_rate: int = 22050 n_fft: int = 2048 hop_length: int = 512 top_db: int = 80 n_iter: int = 32 ) Parameters x_res (int) — +x resolution of spectrogram (time). y_res (int) — +y resolution of spectrogram (frequency bins). sample_rate (int) — +Sample rate of audio. n_fft (int) — +Number of Fast Fourier Transforms. hop_length (int) — +Hop length (a higher number is recommended if y_res < 256). top_db (int) — +Loudest decibel value. n_iter (int) — +Number of iterations for Griffin-Lim Mel inversion. audio_slice_to_image < source > ( slice: int ) → PIL Image Parameters slice (int) — +Slice number of audio to convert (out of get_number_of_slices()). Returns +PIL Image + +A grayscale image of x_res x y_res. + Convert slice of audio to spectrogram. get_audio_slice < source > ( slice: int = 0 ) → np.ndarray Parameters slice (int) — +Slice number of audio (out of get_number_of_slices()). Returns +np.ndarray + +The audio slice as a NumPy array. + Get slice of audio. get_number_of_slices < source > ( ) → int Returns +int + +Number of spectograms audio can be sliced into. + Get number of slices in audio. get_sample_rate < source > ( ) → int Returns +int + +Sample rate of audio. + Get sample rate. image_to_audio < source > ( image: Image ) → audio (np.ndarray) Parameters image (PIL Image) — +An grayscale image of x_res x y_res. Returns +audio (np.ndarray) + +The audio as a NumPy array. + Converts spectrogram to audio. load_audio < source > ( audio_file: str = None raw_audio: ndarray = None ) Parameters audio_file (str) — +An audio file that must be on disk due to Librosa limitation. raw_audio (np.ndarray) — +The raw audio file as a NumPy array. Load audio. set_resolution < source > ( x_res: int y_res: int ) Parameters x_res (int) — +x resolution of spectrogram (time). y_res (int) — +y resolution of spectrogram (frequency bins). Set resolution. diff --git a/scrapped_outputs/bc5f9f1ad1f987c75c954f0a449868e6.txt b/scrapped_outputs/bc5f9f1ad1f987c75c954f0a449868e6.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/bc5f9f1ad1f987c75c954f0a449868e6.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. 
Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. 
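Component swapping is not limited to the scheduler. As a brief sketch of replacing another component, the snippet below loads a separately fine-tuned VAE and passes it to the pipeline; the stabilityai/sd-vae-ft-mse checkpoint is used here purely as an illustrative choice of a better-performing VAE: Copied
from diffusers import DiffusionPipeline, AutoencoderKL
import torch

# load an improved VAE and pass it in place of the pipeline's default VAE
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16, use_safetensors=True
)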
To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. 
For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes.
checkpoint type | weight name | argument for loading weights
original | diffusion_pytorch_model.bin |
floating point | diffusion_pytorch_model.fp16.bin | variant, torch_dtype
non-EMA | diffusion_pytorch_model.non_ema.bin | variant
There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument.
You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. 
+For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. "scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . 
+├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/bc6f6da9350833179e4bad2954bc8b65.txt b/scrapped_outputs/bc6f6da9350833179e4bad2954bc8b65.txt new file mode 100644 index 0000000000000000000000000000000000000000..31e52125400a7fdadb4ea2228a975cdd092ebcc4 --- /dev/null +++ b/scrapped_outputs/bc6f6da9350833179e4bad2954bc8b65.txt @@ -0,0 +1,319 @@ +Image-to-Image Generation + + +StableDiffusionImg2ImgPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, runway, and LAION. The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion. 
+The original codebase can be found here: CompVis/stable-diffusion +StableDiffusionImg2ImgPipeline is compatible with all Stable Diffusion text-to-image checkpoints + +class diffusers.StableDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps.
A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/bc7f14dd461df4d2c24c2272e21e1d99.txt b/scrapped_outputs/bc7f14dd461df4d2c24c2272e21e1d99.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/bc803f6e65cf718d794eb4e2cad294e1.txt b/scrapped_outputs/bc803f6e65cf718d794eb4e2cad294e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/bc803f6e65cf718d794eb4e2cad294e1.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. 
Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. 
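For orientation, the load_lora_into_* helpers documented here are usually driven indirectly through the pipeline-level API. Below is a minimal sketch of loading two adapters and combining them, reusing the checkpoint names from the examples above; it assumes the PEFT-backed set_adapters method is available on the pipeline: Copied
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# load two adapters under distinct names
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
# weight the adapters relative to each other and confirm they are active
pipeline.set_adapters(["toy", "pixel"], adapter_weights=[1.0, 0.5])
print(pipeline.get_active_adapters())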
load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. 
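As a sketch of managing memory when several adapters are loaded, set_lora_device (documented next) can keep only the adapter currently in use on the GPU; the setup mirrors the two-adapter example above: Copied
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

# park the adapter that is not in use on the CPU and keep the active one on the GPU
pipeline.set_lora_device(adapter_names=["pixel"], device="cpu")
pipeline.set_lora_device(adapter_names=["toy"], device="cuda")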
set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/bc80dd358723b32800ea4f628aee8808.txt b/scrapped_outputs/bc80dd358723b32800ea4f628aee8808.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/bc80dd358723b32800ea4f628aee8808.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. 
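For the low-memory suggestions mentioned above (gradient checkpointing, mixed precision, and xFormers), the sketch below shows how those options are commonly combined on the launch command. The flag names are believed to match the script’s parse_args(), but double-check them against the version of textual_inversion.py you are running: Copied
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./cat" \
  --output_dir="textual_inversion_cat" \
  --mixed_precision="fp16" \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention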
Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. 
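As a rough sketch of that customization, the arguments below mirror the dataset creation call shown in the next section; the values (image folder, crop size, placeholder token) are only illustrative, so adapt them to your own data: Copied
train_dataset = TextualInversionDataset(
    data_root="./my_images",          # folder with your example images (illustrative path)
    tokenizer=tokenizer,              # the CLIPTokenizer loaded earlier in main()
    size=256,                         # train on smaller crops to reduce memory
    placeholder_token="<my-concept>",
    repeats=100,
    learnable_property="object",
    center_crop=False,
    set="train",
)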
Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. 
Add the following parameters to the training command: Copied --validation_prompt="A <cat-toy> train" +--num_validation_images=4 +--validation_steps=100 PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="<cat-toy>" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A <cat-toy> train", num_inference_steps=50).images[0] +image.save("cat-train.png") Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/bc839c553043d9bb812216228d4b5333.txt b/scrapped_outputs/bc839c553043d9bb812216228d4b5333.txt new file mode 100644 index 0000000000000000000000000000000000000000..5609c43fc2c76167b35287c9c0d231795b1d9be0 --- /dev/null +++ b/scrapped_outputs/bc839c553043d9bb812216228d4b5333.txt @@ -0,0 +1,332 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity.
By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
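As a quick illustration of the scheduler speed/quality tradeoff mentioned above, the default scheduler can usually be swapped for a faster multistep solver as in the sketch below; the model id and scheduler choice are only examples: Copied
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Reuse the existing scheduler config so the swap stays consistent with the checkpoint.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# Multistep solvers typically need far fewer denoising steps for comparable quality.
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]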
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. 
For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
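The VAE slicing and tiling helpers above have no inline example here, so the following is a minimal sketch of how they are typically called; the model id, prompts, and resolutions are only illustrative: Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latent batch slice by slice to lower peak memory for larger batches.
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4, num_inference_steps=30).images

# Decode tile by tile instead, which helps when generating very large images.
pipe.enable_vae_tiling()
large_image = pipe("a wide panorama of a mountain lake at sunrise", height=768, width=1536).images[0]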
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. original_config_file (str, optional) — +The path to the original config file that was used to train the model. If not provided, the config file +will be inferred from the checkpoint file. model_type (str, optional) — +The type of model to load. If not provided, the model type will be inferred from the checkpoint file. image_size (int, optional) — +The size of the image output. It’s used to configure the sample_size parameter of the UNet and VAE model. load_safety_checker (bool, optional, defaults to False) — +Whether to load the safety checker model or not. 
By default, the safety checker is not loaded unless a safety_checker component is passed to the kwargs. num_in_channels (int, optional) — +Specify the number of input channels for the UNet model. Read more about how to configure UNet model with this parameter +here. scaling_factor (float, optional) — +The scaling factor to use for the VAE model. If not provided, it is inferred from the config file first. +If the scaling factor is not found in the config file, the default value 0.18215 is used. scheduler_type (str, optional) — +The type of scheduler to load. If not provided, the scheduler type will be inferred from the checkpoint file. prediction_type (str, optional) — +The type of prediction to load. If not provided, the prediction type will be inferred from the checkpoint file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. 
text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. 
+ return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/bc8f0b9c2983dcc13de24fb86050fbcb.txt b/scrapped_outputs/bc8f0b9c2983dcc13de24fb86050fbcb.txt new file mode 100644 index 0000000000000000000000000000000000000000..d01ee532445db56e8f74e8c6a472f7e9146b01fe --- /dev/null +++ b/scrapped_outputs/bc8f0b9c2983dcc13de24fb86050fbcb.txt @@ -0,0 +1,97 @@ +Loading and Adding Custom Pipelines + +Diffusers allows you to conveniently load any custom pipeline from the Hugging Face Hub as well as any official community pipeline +via the DiffusionPipeline class. + +Loading custom pipelines from the Hub + +Custom pipelines can be easily loaded from any model repository on the Hub that defines a diffusion pipeline in a pipeline.py file. +Let’s load a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline. +All you need to do is pass the custom pipeline repo id with the custom_pipeline argument alongside the repo from where you wish to load the pipeline modules. + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline" +) +This will load the custom pipeline as defined in the model repository. +By loading a custom pipeline from the Hugging Face Hub, you are trusting that the code you are loading +is safe 🔒. 
Make sure to check out the code online before loading & running it automatically. + +Loading official community pipelines + +Community pipelines are summarized in the community examples folder. +Similarly, you need to pass both the repo id from where you wish to load the weights as well as the custom_pipeline argument. Here the custom_pipeline argument should consist simply of the filename of the community pipeline excluding the .py suffix, e.g. clip_guided_stable_diffusion. +Since community pipelines are often more complex, one can mix loading weights from an official repo id +and passing pipeline modules directly. + + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPFeatureExtractor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, +) + +Adding custom pipelines to the Hub + +To add a custom pipeline to the Hub, all you need to do is to define a pipeline class that inherits +from DiffusionPipeline in a pipeline.py file. +Make sure that the whole pipeline is encapsulated within a single class and that the pipeline.py file +has only one such class. +Let’s quickly define an example pipeline. + + + Copied +import torch +from diffusers import DiffusionPipeline + + +class MyPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + @torch.no_grad() + def __call__(self, batch_size: int = 1, num_inference_steps: int = 50, eta: float = 0.0): + # Sample gaussian noise to begin loop + image = torch.randn((batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)) + + image = image.to(self.device) + + # set step values + self.scheduler.set_timesteps(num_inference_steps) + + for t in self.progress_bar(self.scheduler.timesteps): + # 1. predict noise model_output + model_output = self.unet(image, t).sample + + # 2. predict previous mean of image x_t-1 and add variance depending on eta + # eta corresponds to η in paper and should be between [0, 1] + # do x_t -> x_t-1 + image = self.scheduler.step(model_output, t, image, eta).prev_sample + + image = (image / 2 + 0.5).clamp(0, 1) + image = image.cpu().permute(0, 2, 3, 1).numpy() + + return image +Now you can upload this short file under the name pipeline.py in your preferred model repository. For Stable Diffusion pipelines, you may also join the community organisation for shared pipelines to upload yours. +Finally, we can load the custom pipeline by passing the model repository name, e.g. sd-diffusers-pipelines-library/my_custom_pipeline alongside the model repository from where we want to load the unet and scheduler components.
+ + + Copied +my_pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="patrickvonplaten/my_custom_pipeline" +) diff --git a/scrapped_outputs/bcad38a807226eada5d116e045b9763f.txt b/scrapped_outputs/bcad38a807226eada5d116e045b9763f.txt new file mode 100644 index 0000000000000000000000000000000000000000..45e22755718c396e45e6a4cc8269e866cbee209f --- /dev/null +++ b/scrapped_outputs/bcad38a807226eada5d116e045b9763f.txt @@ -0,0 +1,175 @@ +ControlNet-XS with Stable Diffusion XL ControlNet-XS was introduced in ControlNet-XS by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the original ControlNet can be made much smaller and still produce good results. Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster (see benchmark) and uses ~45% less memory. Here’s the overview from the project page: With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS. We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license. This model was contributed by UmerHA. ❤️ 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetXSPipeline class diffusers.StableDiffusionXLControlNetXSPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetXSModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
controlnet (ControlNetXSModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet-XS guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 control_guidance_start: float = 0.0 control_guidance_end: float = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. control_guidance_start (float, optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float, optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput is +returned, otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetXSPipeline, ControlNetXSModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetXSModel.from_pretrained("UmerHA/ConrolNetXS-SDXL-canny", torch_dtype=torch.float16) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
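These helpers can be toggled freely on a loaded pipeline. Below is a minimal sketch, reusing the checkpoints from the example above; the FreeU factors are values commonly suggested for SDXL-class models, not an official recommendation. Copied
import torch
from diffusers import StableDiffusionXLControlNetXSPipeline, ControlNetXSModel

controlnet = ControlNetXSModel.from_pretrained("UmerHA/ConrolNetXS-SDXL-canny", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Decode the VAE in slices/tiles to trade a little speed for lower peak memory.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# FreeU re-weights skip (s1, s2) and backbone (b1, b2) features during denoising.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)

# ...generate as usual, then switch the mechanisms off again if no longer needed.
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()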
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
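When the same text conditioning is reused across many generations, encode_prompt() can be called once and its outputs passed back into the pipeline call. A minimal sketch, assuming pipe and canny_image are the pipeline and control image from the example above, and assuming the method returns the four embedding tensors in the order unpacked below. Copied
prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting"
negative_prompt = "low quality, bad quality, sketches"

# Encode the text once; the negative embeddings are returned because
# classifier-free guidance is enabled.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# Reuse the precomputed embeddings across calls without re-encoding the text.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,
).images[0]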
diff --git a/scrapped_outputs/bcb75a23944bcd81e380e7d1448c5935.txt b/scrapped_outputs/bcb75a23944bcd81e380e7d1448c5935.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/bcec8fd94b6b6797d6fd400373395f56.txt b/scrapped_outputs/bcec8fd94b6b6797d6fd400373395f56.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20022e247e26d3166908064d68af68c15cebe08 --- /dev/null +++ b/scrapped_outputs/bcec8fd94b6b6797d6fd400373395f56.txt @@ -0,0 +1,302 @@ +Self-Attention Guidance (SAG) + + +Overview + +Improving Sample Quality of Diffusion Models Using Self-Attention Guidance by Susung Hong et al. +The abstract of the paper is the following: +Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. +Resources: +Project Page. +Paper. +Original Code. +Hugging Face Demo. +Colab Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionSAGPipeline +Text-to-Image Generation +🤗 Space + +Usage example + + + + Copied +import torch +from diffusers import StableDiffusionSAGPipeline +from accelerate.utils import set_seed + +pipe = StableDiffusionSAGPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +seed = 8978 +prompt = "." +guidance_scale = 7.5 +num_images_per_prompt = 1 + +sag_scale = 1.0 + +set_seed(seed) +images = pipe( + prompt, num_images_per_prompt=num_images_per_prompt, guidance_scale=guidance_scale, sag_scale=sag_scale +).images +images[0].save("example.png") + +StableDiffusionSAGPipeline + + +class diffusers.StableDiffusionSAGPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
+ + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +sag_scale: float = 0.75 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +sag_scale (float, optional, defaults to 0.75) — +SAG scale as defined in [Improving Sample Quality of Diffusion Models Using Self-Attention Guidance] +(https://arxiv.org/abs/2210.00939). sag_scale is defined as s_s of equation (24) of SAG paper: +https://arxiv.org/pdf/2210.00939.pdf. Typically chosen between [0, 1.0] for better quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
+ + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage.
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/bcefceeed597787582040bb69c395012.txt b/scrapped_outputs/bcefceeed597787582040bb69c395012.txt new file mode 100644 index 0000000000000000000000000000000000000000..b74469f82ef8b466711311984ef602705509440e --- /dev/null +++ b/scrapped_outputs/bcefceeed597787582040bb69c395012.txt @@ -0,0 +1,285 @@ +ControlNet + +Adding Conditional Control to Text-to-Image Diffusion Models (ControlNet) by Lvmin Zhang and Maneesh Agrawala. +This example is based on the training example in the original ControlNet repository. It trains a ControlNet to fill circles using a small synthetic dataset. + +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies. +To successfully run the latest versions of the example scripts, we highly recommend installing from source and keeping the installation up to date. We update the example scripts frequently and install example-specific requirements. +To do this, execute the following steps in a new virtual environment: + + + Copied +git clone https://github.com/huggingface/diffusers +cd diffusers +pip install -e . +Then navigate into the example folder and run: + + + Copied +pip install -r requirements.txt +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config +Or for a default 🤗Accelerate configuration without answering questions about your environment: + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell like a notebook: + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() + +Circle filling dataset + +The original dataset is hosted in the ControlNet repo, but we re-uploaded it here to be compatible with 🤗 Datasets so that it can handle the data loading within the training script. +Our training examples use runwayml/stable-diffusion-v1-5 because that is what the original set of ControlNet models was trained on. However, ControlNet can be trained to augment any compatible Stable Diffusion model (such as CompVis/stable-diffusion-v1-4) or stabilityai/stable-diffusion-2-1. + +Training + +Download the following images to condition our training with: + + + Copied +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png + +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png +Specify the MODEL_DIR environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the --pretrained_model_name_or_path argument of train_controlnet.py, as shown in the commands below.
+ + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=4 +This default configuration requires ~38GB VRAM. +By default, the training script logs outputs to tensorboard. Pass --report_to wandb to use Weights & +Biases. +Gradient accumulation with a smaller batch size can be used to reduce training requirements to ~20 GB VRAM. + + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 + +Training with multiple GPUs + +accelerate allows for seamless multi-GPU training. Follow the instructions here +for running distributed training with accelerate. Here is an example command: + + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch --mixed_precision="fp16" --multi_gpu train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=4 \ + --mixed_precision="fp16" \ + --tracker_project_name="controlnet-demo" \ + --report_to=wandb + +Example results + + +After 300 steps with batch size 8 + + + + +red circle with blue background + + + +cyan circle with brown floral background + + + +After 6000 steps with batch size 8: + + + + +red circle with blue background + + + +cyan circle with brown floral background + + + +Training on a 16 GB GPU + +Enable the following optimizations to train on a 16GB GPU: +Gradient checkpointing +bitsandbyte’s 8-bit optimizer (take a look at the [installation]((https://github.com/TimDettmers/bitsandbytes#requirements—installation) instructions if you don’t already have it installed) +Now you can launch the training script: + + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --use_8bit_adam + +Training on a 12 GB GPU + +Enable the following optimizations to train on a 12GB GPU: +Gradient checkpointing +bitsandbyte’s 8-bit optimizer (take a look at the 
[installation]((https://github.com/TimDettmers/bitsandbytes#requirements—installation) instructions if you don’t already have it installed) +xFormers (take a look at the installation instructions if you don’t already have it installed) +set gradients to None + + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --use_8bit_adam \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none +When using enable_xformers_memory_efficient_attention, please make sure to install xformers by pip install xformers. + +Training on an 8 GB GPU + +We have not exhaustively tested DeepSpeed support for ControlNet. While the configuration does +save memory, we have not confirmed whether the configuration trains successfully. You will very likely +have to make changes to the config to have a successful training run. +Enable the following optimizations to train on a 8GB GPU: +Gradient checkpointing +bitsandbyte’s 8-bit optimizer (take a look at the [installation]((https://github.com/TimDettmers/bitsandbytes#requirements—installation) instructions if you don’t already have it installed) +xFormers (take a look at the installation instructions if you don’t already have it installed) +set gradients to None +DeepSpeed stage 2 with parameter and optimizer offloading +fp16 mixed precision +DeepSpeed can offload tensors from VRAM to either +CPU or NVME. This requires significantly more RAM (about 25 GB). +You’ll have to configure your environment with accelerate config to enable DeepSpeed stage 2. +The configuration file should look like this: + + + Copied +compute_environment: LOCAL_MACHINE +deepspeed_config: + gradient_accumulation_steps: 4 + offload_optimizer_device: cpu + offload_param_device: cpu + zero3_init_flag: false + zero_stage: 2 +distributed_type: DEEPSPEED + +See documentation for more DeepSpeed configuration options. + +Changing the default Adam optimizer to DeepSpeed’s Adam +deepspeed.ops.adam.DeepSpeedCPUAdam gives a substantial speedup but +it requires a CUDA toolchain with the same version as PyTorch. 8-bit optimizer +does not seem to be compatible with DeepSpeed at the moment. + + + Copied +export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path to save model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + --mixed_precision fp16 + +Inference + +The trained model can be run with the StableDiffusionControlNetPipeline. +Set base_model_path and controlnet_path to the values --pretrained_model_name_or_path and +--output_dir were respectively set to in the training script. 
+ + + Copied +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +from diffusers.utils import load_image +import torch + +base_model_path = "path to model" +controlnet_path = "path to controlnet" + +controlnet = ControlNetModel.from_pretrained(controlnet_path, torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + base_model_path, controlnet=controlnet, torch_dtype=torch.float16 +) + +# speed up diffusion process with faster scheduler and memory optimization +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +# remove following line if xformers is not installed +pipe.enable_xformers_memory_efficient_attention() + +pipe.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +# generate image +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] + +image.save("./output.png") diff --git a/scrapped_outputs/bcfdf92f2f566377b652306bd0d759c4.txt b/scrapped_outputs/bcfdf92f2f566377b652306bd0d759c4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/bd14aa7d865109d2ad2e07ddc4e6bdcb.txt b/scrapped_outputs/bd14aa7d865109d2ad2e07ddc4e6bdcb.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dd3da44ce18bcb5133ef0eef41276eeb6637b6a --- /dev/null +++ b/scrapped_outputs/bd14aa7d865109d2ad2e07ddc4e6bdcb.txt @@ -0,0 +1,845 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
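As a quick illustration of the depth-map conditioning described above, the sketch below pairs the pipeline with the official depth ControlNet checkpoint; the image path is a placeholder for a depth map you have already estimated. Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Placeholder path: a grayscale depth estimate of the target scene.
depth_map = load_image("path/to/depth_map.png")

image = pipe(
    "a cozy living room, photorealistic", image=depth_map, num_inference_steps=20
).images[0]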
StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. 
If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. 
If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... 
) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
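The controlnet argument accepted at construction time (see the class parameters above) can also be a list of ControlNets, in which case their outputs are added together. A sketch of that usage with the official canny and depth checkpoints; the image paths and conditioning scales are placeholders. Copied
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Placeholder paths: one prepared control image per ControlNet, in the same order.
canny_image = load_image("path/to/canny_edges.png")
depth_image = load_image("path/to/depth_map.png")

image = pipe(
    "a futuristic city at dusk",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[1.0, 0.6],  # per-ControlNet strength
    num_inference_steps=20,
).images[0]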
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. 
If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. 
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
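Referring back to the __call__ parameters documented above for StableDiffusionControlNetImg2ImgPipeline: the canny example in that section leaves strength, controlnet_conditioning_scale, guess_mode and the control guidance window at their defaults. The sketch below sets them explicitly; the concrete values are illustrative choices, not recommendations from this reference.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image(
    "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(init_image), 100, 200)[:, :, None]
canny_image = Image.fromarray(np.concatenate([edges, edges, edges], axis=2))

image = pipe(
    "futuristic-looking woman",
    image=init_image,
    control_image=canny_image,
    strength=0.8,                       # how far the result may drift from init_image
    controlnet_conditioning_scale=0.8,  # weight of the ControlNet residuals added to the UNet
    guess_mode=False,                   # True conditions on the control image even without a prompt
    control_guidance_start=0.0,         # apply ControlNet from the first denoising step ...
    control_guidance_end=0.8,           # ... and stop after 80% of the steps
    num_inference_steps=30,
    generator=torch.manual_seed(0),
).images[0]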
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). 
It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for NumPy array, it would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... 
) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
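One note on the inpainting example above: make_canny_condition calls cv2.Canny and Image.fromarray, but the snippet only imports numpy and torch, so two imports are missing for it to run as written:

import cv2            # from the opencv-python package
from PIL import Image

Also, enable_vae_slicing is documented in this section without an example of its own. A minimal sketch, continuing from the pipe, init_image, mask_image and control_image defined in the inpainting example (the batch size is illustrative):

pipe.enable_vae_slicing()      # decode latents one image at a time to keep VAE memory flat
images = pipe(
    "a handsome man with ray-ban sunglasses",
    num_inference_steps=20,
    num_images_per_prompt=4,   # a larger batch now fits in memory more comfortably
    image=init_image,
    mask_image=mask_image,
    control_image=control_image,
).images
pipe.disable_vae_slicing()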
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. 
When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... 
return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/bd595da3fbcc79b3c3e1e45c838741bc.txt b/scrapped_outputs/bd595da3fbcc79b3c3e1e45c838741bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2282512f2f0bcea89548e640b2b6d75311dad9c --- /dev/null +++ b/scrapped_outputs/bd595da3fbcc79b3c3e1e45c838741bc.txt @@ -0,0 +1,27 @@ +OpenVINO 🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You’ll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version: Copied pip install --upgrade-strategy eager optimum["openvino"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with OpenVINO. Stable Diffusion To load and run inference, use the OVStableDiffusionPipeline. 
If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set export=True: Copied from optimum.intel import OVStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] + +# Don't forget to save the exported model +pipeline.save_pretrained("openvino-sd-v1-5") To further speed-up inference, statically reshape the model. If you change any parameters such as the outputs height or width, you’ll need to statically reshape your model again. Copied # Define the shapes related to the inputs and desired outputs +batch_size, num_images, height, width = 1, 1, 512, 512 + +# Statically reshape the model +pipeline.reshape(batch_size, height, width, num_images) +# Compile the model before inference +pipeline.compile() + +image = pipeline( + prompt, + height=height, + width=width, + num_images_per_prompt=num_images, +).images[0] You can find more examples in the 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the OVStableDiffusionXLPipeline: Copied from optimum.intel import OVStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Rembrandt" +image = pipeline(prompt).images[0] To further speed-up inference, statically reshape the model as shown in the Stable Diffusion section. You can find more examples in the 🤗 Optimum documentation, and running SDXL in OpenVINO is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/bd636e16f97abe750d9c6e12beaf8a8b.txt b/scrapped_outputs/bd636e16f97abe750d9c6e12beaf8a8b.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/bd636e16f97abe750d9c6e12beaf8a8b.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. 
We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). 
device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/bd66ce670ce40a7f0b44ccdc3b051c2d.txt b/scrapped_outputs/bd66ce670ce40a7f0b44ccdc3b051c2d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/bd66ce670ce40a7f0b44ccdc3b051c2d.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. 
To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. 
Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/bd71ba0b23056d0474cdf4878a5ce07f.txt b/scrapped_outputs/bd71ba0b23056d0474cdf4878a5ce07f.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/bd71ba0b23056d0474cdf4878a5ce07f.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. 
It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images.
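The prompt tokenization step looks roughly like the following (a simplified sketch rather than the exact code from train_t2i_adapter_sdxl.py; SDXL actually uses two tokenizers, and the tokenize_captions helper name is illustrative): Copied
from transformers import CLIPTokenizer

# Tokenizer for the first SDXL text encoder (the second tokenizer is handled the same way)
tokenizer = CLIPTokenizer.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="tokenizer"
)

def tokenize_captions(captions):
    # Pad or truncate every caption to the tokenizer's maximum length
    inputs = tokenizer(
        captions,
        max_length=tokenizer.model_max_length,
        padding="max_length",
        truncation=True,
        return_tensors="pt",
    )
    return inputs.input_ids
The conditioning image transforms are then defined as: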
Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/bd91fdd6915743ef490c7a669e07e3b1.txt b/scrapped_outputs/bd91fdd6915743ef490c7a669e07e3b1.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/bd91fdd6915743ef490c7a669e07e3b1.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead.
In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. ❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce a (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. +It is also possible to manually specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. 
+Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. +This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. 
Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. unet (UniDiffuserModel) — +A U-ViT model with UNNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
+Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This assumes +a full set of VAE, CLIP, and text latents, if supplied, overrides the value of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are be generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/bdb1610e597582a4243b2534d7f4e872.txt b/scrapped_outputs/bdb1610e597582a4243b2534d7f4e872.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/bdb1610e597582a4243b2534d7f4e872.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. 
Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. 
scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/bddf45c19b83a09ca2b12ac42fee79de.txt b/scrapped_outputs/bddf45c19b83a09ca2b12ac42fee79de.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/bddf45c19b83a09ca2b12ac42fee79de.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? 
Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! 
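As a quick recap, the generic pattern can be condensed into a short skeleton (a sketch only; it assumes an unconditional model, a scheduler, and an initial noisy sample are already defined, and real pipelines add conditioning and post-processing on top): Copied
import torch

def denoise(model, scheduler, sample, num_inference_steps=50):
    # 1. Set the scheduler's timesteps
    scheduler.set_timesteps(num_inference_steps)
    # 2. Iterate over the timesteps
    for t in scheduler.timesteps:
        with torch.no_grad():
            # 3. Predict the noise residual and let the scheduler compute the previous sample
            noise_pred = model(sample, t).sample
            sample = scheduler.step(noise_pred, t, sample).prev_sample
    return sample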
Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... ) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... 
text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! 
Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait to see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/be15b819aa762b3db96c1aaae3b3b52d.txt b/scrapped_outputs/be15b819aa762b3db96c1aaae3b3b52d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/be15b819aa762b3db96c1aaae3b3b52d.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/be1d72b934aa00e15420f145cab5cf50.txt b/scrapped_outputs/be1d72b934aa00e15420f145cab5cf50.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/be1d72b934aa00e15420f145cab5cf50.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear.
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
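As a usage sketch that is not part of the reference above, the most common way to use this scheduler is to swap it into an existing pipeline with from_config(); the runwayml/stable-diffusion-v1-5 checkpoint below is only an assumed example:

import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Assumed example checkpoint; any pipeline with a compatible scheduler config works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Reuse the existing scheduler config so the betas and timestep spacing match the checkpoint.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Ancestral sampling is stochastic, so pass a seeded generator for reproducible results.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    "a photo of an astronaut riding a horse", num_inference_steps=25, generator=generator
).images[0]

Because the scheduler injects fresh noise at every step, results change from run to run unless a seeded generator is passed.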
EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/be5f80e451c58807a18ab0a331084a18.txt b/scrapped_outputs/be5f80e451c58807a18ab0a331084a18.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/be5f80e451c58807a18ab0a331084a18.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 
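If you want to check the speedup on your own hardware instead of relying on the numbers above, you can time the call directly. A minimal sketch, reusing the pipeline, prompt, and generator from the example (the exact seconds will vary by GPU):

import time
import torch

torch.cuda.synchronize()  # wait for pending GPU work so it doesn't skew the measurement
start = time.perf_counter()
image = pipeline(prompt, generator=generator).images[0]
torch.cuda.synchronize()  # ensure generation has actually finished before stopping the clock
print(f"Generation took {time.perf_counter() - start:.1f} seconds")

Running this once with the float32 pipeline and once with the float16 pipeline gives you the comparison on your own GPU.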
💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. 
All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! 
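Because get_inputs() seeds the i-th Generator with its index in the batch, it can also help to save each image with its seed in the filename so a good result is easy to reproduce later. A small optional sketch (the filenames are just an example):

images = pipeline(**get_inputs(batch_size=8)).images
for seed, image in enumerate(images):
    # get_inputs() uses manual_seed(i) for the i-th prompt, so the index doubles as the seed
    image.save(f"warrior_chief_seed_{seed}.png")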
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/be92e03e33a2e77c177b9894efbbf124.txt b/scrapped_outputs/be92e03e33a2e77c177b9894efbbf124.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/be92e03e33a2e77c177b9894efbbf124.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. 
Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. 
If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/beb27c5cd8c445e9b1986e6de457648a.txt b/scrapped_outputs/beb27c5cd8c445e9b1986e6de457648a.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/beb27c5cd8c445e9b1986e6de457648a.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. 
The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. 
Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. 
Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/beb7cac37f3ccae607e922f97550f644.txt b/scrapped_outputs/beb7cac37f3ccae607e922f97550f644.txt new file mode 100644 index 0000000000000000000000000000000000000000..e109b181bff7e509d8447aec9e012243d4f843dc --- /dev/null +++ b/scrapped_outputs/beb7cac37f3ccae607e922f97550f644.txt @@ -0,0 +1,115 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. 
You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/dreambooth +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. 
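Conceptually, the strategy caps each timestep's loss weight at min(SNR(t), gamma) / SNR(t) so that nearly noise-free timesteps with very large SNR no longer dominate the objective. The sketch below is only a rough illustration of that weighting for noise (epsilon) prediction, not code taken from the training script, and the helper name and argument names are assumptions:

import torch
import torch.nn.functional as F

def min_snr_loss(noise_pred, noise, timesteps, alphas_cumprod, snr_gamma=5.0):
    # Signal-to-noise ratio of each sampled timestep
    snr = alphas_cumprod[timesteps] / (1.0 - alphas_cumprod[timesteps])
    # Cap the weight at gamma so low-noise (high-SNR) timesteps stop dominating the loss
    weights = torch.clamp(snr, max=snr_gamma) / snr
    per_sample_loss = F.mse_loss(noise_pred.float(), noise.float(), reduction="none").mean(dim=[1, 2, 3])
    return (per_sample_loss * weights).mean()

The actual script applies the same idea with scheduler-specific details and adjusts the weighting when v_prediction is used.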
The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. 
The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 
16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") PyTorch Flax Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/bedee84c722b5c5e038d150dfad3c3db.txt b/scrapped_outputs/bedee84c722b5c5e038d150dfad3c3db.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/bedee84c722b5c5e038d150dfad3c3db.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. 
It only generates images that resemble its training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. 
You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/bf12893129a78cb924b942dd5482510a.txt b/scrapped_outputs/bf12893129a78cb924b942dd5482510a.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/bf12893129a78cb924b942dd5482510a.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. 
Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/bf369c7495d138548e111eaa77e4b326.txt b/scrapped_outputs/bf369c7495d138548e111eaa77e4b326.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cb75a8be6e31ddd7b690ba57afc96eb0139e038 --- /dev/null +++ b/scrapped_outputs/bf369c7495d138548e111eaa77e4b326.txt @@ -0,0 +1,237 @@ +How to contribute to Diffusers 🧨 + +We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! +It also helps us if you spread the word: reference the library from blog posts +on the awesome projects it made possible, shout out on Twitter every time it has +helped you, or simply star the repo to say “thank you”. +We encourage everyone to start by saying 👋 in our public Discord channel. We discuss the hottest trends about diffusion models, ask questions, show-off personal projects, help each other with contributions, or just hang out ☕. +Whichever way you choose to contribute, we strive to be part of an open, welcoming and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. + +Overview + +You can contribute in so many ways! Just to name a few: +Fixing outstanding issues with the existing code. +Implementing new diffusion pipelines, new schedulers or new models. +Contributing to the examples. +Contributing to the documentation. +Submitting issues related to bugs or desired new features. +All are equally valuable to the community. + +Browse GitHub issues for suggestions + +If you need inspiration, you can look out for issues you’d like to tackle to contribute to the library. There are a few filters that can be helpful: +See Good first issues for general opportunities to contribute and getting started with the codebase. 
+See New pipeline/model to contribute exciting new diffusion models or diffusion pipelines. +See New scheduler to work on new samplers and schedulers. + +Submitting a new issue or feature request + +Do your best to follow these guidelines when submitting an issue or a feature +request. It will make it easier for us to come back to you quickly and with good +feedback. + +Did you find a bug? + +The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. +First, we would really appreciate it if you could make sure the bug was not +already reported (use the search bar on GitHub under Issues). + +Do you want to implement a new diffusion pipeline / diffusion model? + +Awesome! Please provide the following information: +Short description of the diffusion pipeline and link to the paper; +Link to the implementation if it is open-source; +Link to the model weights if they are available. +If you are willing to contribute the model yourself, let us know so we can best +guide you. + +Do you want a new feature (that is not a model)? + +A world-class feature request addresses the following points: +Motivation first: +Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. +Is it related to something you would need for a project? We’d love to hear +about it! +Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. +Write a full paragraph describing the feature; +Provide a code snippet that demonstrates its future use; +In case this is related to a paper, please attach a link; +Attach any additional information (drawings, screenshots, etc.) you think may help. +If your issue is well written we’re already 80% of the way there by the time you +post it. + +Start contributing! (Pull Requests) + +Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. +You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. +Follow these steps to start contributing (supported Python versions): +Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. +Clone your fork to your local disk, and add the base repository as a remote: + + + Copied +$ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git +Create a new branch to hold your development changes: + + + Copied +$ git checkout -b a-descriptive-name-for-my-changes +Do not work on the main branch. +Set up a development environment by running the following command in a virtual environment: + + + Copied +$ pip install -e ".[dev]" +(If Diffusers was already installed in the virtual environment, remove +it with pip uninstall diffusers before reinstalling it in editable +mode with the -e flag.) 
+To run the full test suite, you might need the additional dependency on transformers and datasets which requires a separate source +install: + + + Copied +$ git clone https://github.com/huggingface/transformers +$ cd transformers +$ pip install -e . + + + Copied +$ git clone https://github.com/huggingface/datasets +$ cd datasets +$ pip install -e . +If you have already cloned that repo, you might need to git pull to get the most recent changes in the datasets +library. +Develop the features on your branch. +As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: + + + Copied +$ pytest tests/.py +You can also run the full suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: + + + Copied +$ make test +For more information about tests, check out the +dedicated documentation +🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: + + + Copied +$ make style +🧨 Diffusers also uses flake8 and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however you can also run the same checks with: + + + Copied +$ make quality +Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: + + + Copied +$ git add modified_file.py +$ git commit +It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: + + + Copied +$ git fetch upstream +$ git rebase upstream/main +Push the changes to your account using: + + + Copied +$ git push -u origin a-descriptive-name-for-my-changes +Once you are satisfied (and the checklist below is happy too), go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. +It’s ok if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. + +Checklist + +The title of your pull request should be a summary of its contribution; +If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); +To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; +Make sure existing tests pass; +Add high-coverage tests. No quality testing = no merge.If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +If you are adding a new tokenizer, write tests, and make sure +RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py passes. +CircleCI does not run the slow tests, but GitHub actions does every night! +All public methods must have informative docstrings that work nicely with sphinx. See [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example. 
+Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos and other non-text files. We prefer to leverage a hf.co hosted dataset like +the ones hosted on hf-internal-testing in which to place these files and reference or huggingface/documentation-images. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. + +Tests + +An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. +We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: + + + Copied +$ python -m pytest -n auto --dist=loadfile -s -v ./tests/ +In fact, that’s how make test is implemented! +You can specify a smaller set of tests in order to test only the feature +you’re working on. +By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! + + + Copied +$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ +unittest is fully supported, here’s how to run tests with it: + + + Copied +$ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v + +Syncing forked main with upstream (HuggingFace) main + +To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: +When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. +If a PR is absolutely necessary, use the following steps after checking out your branch: + + + Copied +$ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing + +Style guide + +For documentation strings, 🧨 Diffusers follows the google style. +This guide was heavily inspired by the awesome scikit-learn guide to contributing. diff --git a/scrapped_outputs/bf5b945d6883901086b894d3ee29376b.txt b/scrapped_outputs/bf5b945d6883901086b894d3ee29376b.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ab6d945875a0e619ebe5c590dd7d60c41d0ccbd --- /dev/null +++ b/scrapped_outputs/bf5b945d6883901086b894d3ee29376b.txt @@ -0,0 +1,362 @@ +DEIS + +Fast Sampling of Diffusion Models with Exponential Integrator. + +Overview + +Original paper can be found here. The original implementation can be found here. + +DEISMultistepScheduler + + +class diffusers.DEISMultistepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'deis' +solver_type: str = 'logrho' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. 
+ + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc. + + +solver_order (int, default 2) — +the order of DEIS; can be 1, 2, or 3. We recommend solver_order=2 for guided sampling and +solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon) or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). +Note that the thresholding method is unsuitable for latent-space diffusion models (such as +Stable Diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Only works when thresholding=True. + + +algorithm_type (str, default deis) — +the algorithm type for the solver. Currently, only the multistep deis algorithm is supported; other variants of DEIS may be added in +the future. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. We empirically +find this trick can stabilize the sampling of DEIS for steps < 15, especially for steps <= 10. + + + +DEIS (https://arxiv.org/abs/2204.13902) is a fast, high-order solver for diffusion ODEs. We slightly modify the +polynomial fitting formula to work in log-rho space instead of the original linear t space used in the DEIS paper. The modification +yields closed-form coefficients for the exponential multistep update instead of relying on a numerical solver. More +variants of DEIS can be found at https://github.com/qsh-zh/deis. +Currently, we support the log-rho multistep DEIS. We recommend solver_order=2 or 3, while solver_order=1 +reduces to DDIM. +We also support the “dynamic thresholding” method from Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set thresholding=True to use dynamic thresholding. +~ConfigMixin takes care of storing all config attributes that are passed to the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the DEIS algorithm needs.
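Since the update methods documented here are normally driven by a pipeline rather than called by hand, the following is a minimal usage sketch that swaps DEISMultistepScheduler into an existing pipeline via from_config. The runwayml/stable-diffusion-v1-5 checkpoint, the prompt, and the 25-step setting are illustrative assumptions; any pipeline with a compatible scheduler config should work the same way:
Copied
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

# Load any diffusion pipeline; the checkpoint here is only an example.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Replace the default scheduler with DEIS, reusing the existing scheduler config.
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# DEIS is a fast solver, so a relatively small number of inference steps is usually enough.
image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
image.save("astronaut.png")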
+ +deis_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DEIS (equivalent to DDIM). + +multistep_deis_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order multistep DEIS. + +multistep_deis_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order multistep DEIS. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. 
+ + +return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the multistep DEIS. diff --git a/scrapped_outputs/bf67ee657563f56a174990b1d02fbc59.txt b/scrapped_outputs/bf67ee657563f56a174990b1d02fbc59.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/bf67ee657563f56a174990b1d02fbc59.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from its GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder, CLIP-ViT-G, and ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +An image_processor to be used to preprocess images for CLIP. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
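Before the method reference below, here is a short sketch of the typical two-stage Kandinsky 2.2 workflow, where the prior pipeline produces CLIP image embeddings that the decoder pipeline then turns into an image. The fp16 dtype and the enable_model_cpu_offload() calls are optional memory-saving choices, not requirements of the API:
Copied
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

# Stage 1: the prior maps the text prompt to CLIP image embeddings.
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
)
pipe_prior.enable_model_cpu_offload()

# Stage 2: the decoder turns the image embeddings into pixels.
pipe = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "red cat, 4k photo"
image_emb, negative_image_emb = pipe_prior(prompt).to_tuple()

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("cat.png")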
__call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... 
).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... 
).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. 
callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights (List[float]) — +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1).
negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. 
strength (float, optional, defaults to 0.3) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — The CLIP image embeddings for the negative text prompt; these will be used to condition the image generation. height (int, optional, defaults to 512) — The height in pixels of the generated image. width (int, optional, defaults to 512) — The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler, DDPMScheduler]) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents.
prior_prior (PriorTransformer) — The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — An image_processor to be used to preprocess images for CLIP. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. height (int, optional, defaults to 512) — The height in pixels of the generated image. width (int, optional, defaults to 512) — The width in pixels of the generated image.
prior_guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25
).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
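To make the offloading trade-off described above concrete, here is a minimal sketch (an illustration, not part of the original example) that loads the same combined checkpoint and enables one of the two methods; in practice you would pick whichever fits your memory budget:

from diffusers import AutoPipelineForImage2Image
import torch

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)

# Model-level offloading: moves one whole model to the GPU at a time.
# Moderate memory savings, small performance impact.
pipe.enable_model_cpu_offload()

# Alternatively, submodule-level offloading: larger memory savings, slower inference.
# pipe.enable_sequential_cpu_offload()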
KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDPMScheduler) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — The CLIP image embeddings for the text prompt, which will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again. strength (float, optional, defaults to 0.3) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — The ControlNet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — The CLIP image embeddings for the negative text prompt; these will be used to condition the image generation. height (int, optional, defaults to 512) — The height in pixels of the generated image. width (int, optional, defaults to 512) — The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor).
callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDPMScheduler) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky 2.2 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — The CLIP image embeddings for the text prompt, which will be used to condition the image generation. image (PIL.Image.Image) — Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will be masked out with mask_image and repainted according to prompt. mask_image (np.array) — Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — The CLIP image embeddings for the negative text prompt; these will be used to condition the image generation. height (int, optional, defaults to 512) — The height in pixels of the generated image. width (int, optional, defaults to 512) — The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: (a usage sketch pairing this pipeline with a separately loaded prior pipeline appears at the end of this section) KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler, DDPMScheduler]) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — An image_processor to be used to preprocess images for CLIP. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, or tensor representing an image batch, that will be used as the starting point for the process. Can also accept image latents as image; if latents are passed directly, they will not be encoded again. mask_image (np.array) — Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. height (int, optional, defaults to 512) — The height in pixels of the generated image. width (int, optional, defaults to 512) — The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png"
)

mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25
).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
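As referenced in the KandinskyV22InpaintPipeline section above, here is a minimal two-stage sketch (an illustration rather than an official example) that pairs the non-combined inpainting pipeline with a separately loaded prior pipeline; the checkpoint names are the ones used elsewhere on this page, and KandinskyV22PriorPipeline is assumed to be importable from diffusers:

import numpy as np
import torch
from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline
from diffusers.utils import load_image

# Stage 1: map the text prompt to CLIP image embeddings with the prior
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_emb, zero_image_emb = pipe_prior("a hat").to_tuple()

# Stage 2: inpaint the masked region conditioned on those embeddings
pipe = KandinskyV22InpaintPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1  # white (1) pixels are repainted, black (0) pixels are preserved

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=zero_image_emb,
    image=init_image,
    mask_image=mask,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("cat_with_hat.png")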
diff --git a/scrapped_outputs/bf8fabf67af6964feb57f14c6d3f037e.txt b/scrapped_outputs/bf8fabf67af6964feb57f14c6d3f037e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/bf9a90184429716dbf81e5ab9c3baa26.txt b/scrapped_outputs/bf9a90184429716dbf81e5ab9c3baa26.txt new file mode 100644 index 0000000000000000000000000000000000000000..004d63135cbd3902cdc0fc7688be065a9d048582 --- /dev/null +++ b/scrapped_outputs/bf9a90184429716dbf81e5ab9c3baa26.txt @@ -0,0 +1,133 @@ +Inverse Denoising Diffusion Implicit Models (DDIMInverse) + + +Overview + +This scheduler is the inverted scheduler of Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models + +DDIMInverseScheduler + + +class diffusers.DDIMInverseScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +clip_sample: bool = True +set_alpha_to_one: bool = True +steps_offset: int = 0 +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +clip_sample (bool, default True) — +option to clip predicted sample between -1 and 1 for numerical stability. + + +set_alpha_to_one (bool, default True) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. 
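As a brief usage sketch (not part of the original scheduler reference), the inverse scheduler is typically built from the configuration of an existing pipeline's DDIM scheduler; the Stable Diffusion checkpoint below is the one used elsewhere in this document and is only illustrative:

import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionPipeline

# Load a pipeline and make sure its forward scheduler is DDIM
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Build the inverse scheduler from the same configuration and set its timesteps
inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
inverse_scheduler.set_timesteps(50)

The resulting scheduler can then be stepped in the noising direction (clean latent toward noise), which is how DDIM inversion is used in editing workflows such as Null-text Inversion.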
+For more details, see the original paper: https://arxiv.org/abs/2010.02502 + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Optional[int] = None + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (int, optional) — current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. diff --git a/scrapped_outputs/bfbe5d8fe036fe1cf5db952e0204407d.txt b/scrapped_outputs/bfbe5d8fe036fe1cf5db952e0204407d.txt new file mode 100644 index 0000000000000000000000000000000000000000..b26a6d56b0f7175109506df5db21894b73ff5f5f --- /dev/null +++ b/scrapped_outputs/bfbe5d8fe036fe1cf5db952e0204407d.txt @@ -0,0 +1,25 @@ +Metal Performance Shaders (MPS) 🤗 Diffusers is compatible with Apple silicon (M1/M2 chips) using the PyTorch mps device, which uses the Metal framework to leverage the GPU on MacOS devices. You’ll need to have: macOS computer with Apple silicon (M1/M2) hardware macOS 12.6 or later (13.0 or later recommended) arm64 version of Python PyTorch 2.0 (recommended) or 1.13 (minimum version supported for mps) The mps backend uses PyTorch’s .to() interface to move the Stable Diffusion pipeline on to your M1 or M2 device: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +image Generating multiple prompts in a batch can crash or fail to work reliably. We believe this is related to the mps backend in PyTorch. While this is being investigated, you should iterate instead of batching. If you’re using PyTorch 1.13, you need to “prime” the pipeline with an additional one-time pass through it. This is a temporary workaround for an issue where the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and after just one inference step you can discard the result. Copied from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("mps") + pipe.enable_attention_slicing() + + prompt = "a photo of an astronaut riding a horse on mars" + # First-time "warmup" pass if PyTorch version is 1.13 ++ _ = pipe(prompt, num_inference_steps=1) + + # Results match those from the CPU device after the warmup pass. + image = pipe(prompt).images[0] Troubleshoot M1/M2 performance is very sensitive to memory pressure. When this occurs, the system automatically swaps if it needs to which significantly degrades performance. To prevent this from happening, we recommend attention slicing to reduce memory pressure during inference and prevent swapping. This is especially relevant if your computer has less than 64GB of system RAM, or if you generate images at non-standard resolutions larger than 512×512 pixels. 
Call the enable_attention_slicing() function on your pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True).to("mps") +pipeline.enable_attention_slicing() Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually improves performance by ~20% in computers without universal memory, but we’ve observed better performance in most Apple silicon computers unless you have 64GB of RAM or more. diff --git a/scrapped_outputs/bfc54f1a2804a8a2b0233ef0f750eda8.txt b/scrapped_outputs/bfc54f1a2804a8a2b0233ef0f750eda8.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/bfc54f1a2804a8a2b0233ef0f750eda8.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/bfe22ba426c834ef86adc3365fabd3ec.txt b/scrapped_outputs/bfe22ba426c834ef86adc3365fabd3ec.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/bfe22ba426c834ef86adc3365fabd3ec.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling. It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. 
Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. 
Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/c02be50633d52cddf1a983cb3cfe6197.txt b/scrapped_outputs/c02be50633d52cddf1a983cb3cfe6197.txt new file mode 100644 index 0000000000000000000000000000000000000000..51eec044ff9541ddf40cd3ef6404f0e25abfaa6f --- /dev/null +++ b/scrapped_outputs/c02be50633d52cddf1a983cb3cfe6197.txt @@ -0,0 +1,226 @@ +aMUSEd aMUSEd was introduced in aMUSEd: An Open MUSE Reproduction by Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. Amused is a lightweight text to image model based off of the MUSE architecture. Amused is particularly useful in applications that require a lightweight and fast model such as generating many images quickly at once. Amused is a vqvae token based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few forward pass generation process, amused can generate many images quickly. This benefit is seen particularly at larger batch sizes. The abstract from the paper is: We present aMUSEd, an open-source, lightweight masked image model (MIM) for text-to-image generation based on MUSE. With 10 percent of MUSE’s parameters, aMUSEd is focused on fast image generation. 
We believe MIM is under-explored compared to latent diffusion, the prevailing approach for text-to-image generation. Compared to latent diffusion, MIM requires fewer inference steps and is more interpretable. Additionally, MIM can be fine-tuned to learn additional styles with only a single image. We hope to encourage further exploration of MIM by demonstrating its effectiveness on large-scale text-to-image generation and releasing reproducible training code. We also release checkpoints for two models which directly produce images at 256x256 and 512x512 resolutions.

Model       Params
amused-256  603M
amused-512  608M

AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image. width (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 12) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. generator (torch.Generator, optional) — A torch.Generator to make generation deterministic. latents (torch.IntTensor, optional) — Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image generation. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument. A single vector from the pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — Pre-generated penultimate hidden states from the text encoder providing additional text conditioning.
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. 
When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. 
When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/c05e7f6dd82aa043f537dfe65df37a17.txt b/scrapped_outputs/c05e7f6dd82aa043f537dfe65df37a17.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a6f8d6ced6b91e1a0e4d7840137c4d469ea2882 --- /dev/null +++ b/scrapped_outputs/c05e7f6dd82aa043f537dfe65df37a17.txt @@ -0,0 +1,154 @@ +Scalable Diffusion Models with Transformers (DiT) + + +Overview + +Scalable Diffusion Models with Transformers (DiT) by William Peebles and Saining Xie. +The abstract of the paper is the following: +We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. +The original codebase of this paper can be found here: facebookresearch/dit. 
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dit.py +Conditional Image Generation +- + +Usage example + + + + Copied +from diffusers import DiTPipeline, DPMSolverMultistepScheduler +import torch + +pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +# pick words from Imagenet class labels +pipe.labels # to print all available words + +# pick words that exist in ImageNet +words = ["white shark", "umbrella"] + +class_ids = pipe.get_label_ids(words) + +generator = torch.manual_seed(33) +output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +image = output.images[0] # label 'white shark' + +DiTPipeline + + +class diffusers.DiTPipeline + +< +source +> +( +transformer: Transformer2DModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers +id2label: typing.Union[typing.Dict[int, str], NoneType] = None + +) + + +Parameters + +transformer (Transformer2DModel) — +Class conditioned Transformer in Diffusion model to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +scheduler (DDIMScheduler) — +A scheduler to be used in combination with dit to denoise the encoded image latents. + + + +This pipeline inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +class_labels: typing.List[int] +guidance_scale: float = 4.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +class_labels (List[int]) — +List of imagenet class labels for the images to be generated. + + +guidance_scale (float, optional, defaults to 4.0) — +Scale of the guidance signal. + + +generator (torch.Generator, optional) — +A torch generator to make generation +deterministic. + + +num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +get_label_ids + +< +source +> +( +label: typing.Union[str, typing.List[str]] + +) +→ +list of int + +Parameters + +label (str or dict of str) — label strings to be mapped to class ids. + + +Returns + +list of int + + + +Class ids to be processed by pipeline. + + +Map label strings, e.g. from ImageNet, to corresponding class ids. diff --git a/scrapped_outputs/c0e2942c18a3fc1388442c214741970d.txt b/scrapped_outputs/c0e2942c18a3fc1388442c214741970d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e37bbead67d21277358aedb13e677930bc250b1b --- /dev/null +++ b/scrapped_outputs/c0e2942c18a3fc1388442c214741970d.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. 
As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory efficient attention 2.63s x3.61 Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speeds up computations with typically negligible loss in numerical accuracy. Copied import torch + +torch.backends.cuda.matmul.allow_tf32 = True You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] Don’t use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. Distilled model You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet’s residual and attention blocks are shed to reduce the model size. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model. Learn more about in the Distilled Stable Diffusion inference guide! diff --git a/scrapped_outputs/c1081f8c5c2a7a9626a0059deea41941.txt b/scrapped_outputs/c1081f8c5c2a7a9626a0059deea41941.txt new file mode 100644 index 0000000000000000000000000000000000000000..6aaf3fc017641a3b23f127adc2cdbafd5e059ae6 --- /dev/null +++ b/scrapped_outputs/c1081f8c5c2a7a9626a0059deea41941.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. 
Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None interpolation_scale: float = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. 
cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. diff --git a/scrapped_outputs/c11006b0e3f376eeec9f7bbf9505feb5.txt b/scrapped_outputs/c11006b0e3f376eeec9f7bbf9505feb5.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/c11006b0e3f376eeec9f7bbf9505feb5.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. 
For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. 
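To make the InstructPix2Pix editing workflow described above more concrete, here is a minimal, illustrative sketch. It assumes the publicly released timbrooks/instruct-pix2pix checkpoint, a CUDA device, and reuses a sample image URL that appears elsewhere in these docs; the edit instruction and parameter values are examples, not prescribed settings. Pix2Pix Zero is exposed through its own pipeline class (StableDiffusionPix2PixZeroPipeline) and additionally expects source and target concept embeddings, so it is not shown here.

import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Assumption: the timbrooks/instruct-pix2pix checkpoint and a CUDA GPU are available.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Any RGB image works as input; this URL is reused from the Amused image-to-image example above.
image = load_image(
    "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg"
).resize((512, 512))

# The edit is described in natural language. image_guidance_scale controls how closely
# the output should stick to the input image.
edited = pipe(
    "make it snow heavily",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
edited.save("edited.png")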
Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. 
It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/c12020bcd563314f0dab98e9dbb97599.txt b/scrapped_outputs/c12020bcd563314f0dab98e9dbb97599.txt new file mode 100644 index 0000000000000000000000000000000000000000..6239505b8ff5f3f7eb6043b475677f1d948af531 --- /dev/null +++ b/scrapped_outputs/c12020bcd563314f0dab98e9dbb97599.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes, or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timestep and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timestep. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. 
Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timestep * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now, you can pass the callback function to the callback_on_step_end parameter and the prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_custom_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step, and modifies the pipeline attributes and tensor variables for the next denoising step. With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use-case and require a callback function with a different execution point! Interrupt the diffusion process Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they’re unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. The interruption callback is supported for text-to-image, image-to-image, and inpainting for the StableDiffusionPipeline and StableDiffusionXLPipeline. This callback function should take the following arguments: pipe, i, t, and callback_kwargs (this must be returned). Set the pipeline’s _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50. Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) diff --git a/scrapped_outputs/c142ff32307875ad978f403b063eaed1.txt b/scrapped_outputs/c142ff32307875ad978f403b063eaed1.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/c142ff32307875ad978f403b063eaed1.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. 
Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) 
just like the call methods of each respective pipeline diff --git a/scrapped_outputs/c169d873a18e0cecccee6060615ac033.txt b/scrapped_outputs/c169d873a18e0cecccee6060615ac033.txt new file mode 100644 index 0000000000000000000000000000000000000000..9870975bcefa54ea72473d89b0342fceb38f6b83 --- /dev/null +++ b/scrapped_outputs/c169d873a18e0cecccee6060615ac033.txt @@ -0,0 +1,176 @@ +VQDiffusion + + +Overview + +Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo +The abstract of the paper is the following: +We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_vq_diffusion.py +Text-to-Image Generation +- + +VQDiffusionPipeline + + +class diffusers.VQDiffusionPipeline + +< +source +> +( +vqvae: VQModel +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +transformer: Transformer2DModel +scheduler: VQDiffusionScheduler +learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings + +) + + +Parameters + +vqvae (VQModel) — +Vector Quantized Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent +representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. VQ Diffusion uses the text portion of +CLIP, specifically +the clip-vit-base-patch32 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +transformer (Transformer2DModel) — +Conditional transformer to denoise the encoded image latents. + + +scheduler (VQDiffusionScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. + + + +Pipeline for text-to-image generation using VQ Diffusion +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +num_inference_steps: int = 100 +guidance_scale: float = 5.0 +truncation_rate: float = 1.0 +num_images_per_prompt: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — +Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at +most truncation_rate. The lowest probabilities that would increase the cumulative probability above +truncation_rate are set to zero. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor of shape (batch), optional) — +Pre-generated noisy latents to be used as inputs for image generation. Must be valid embedding indices. +Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will +be generated of completely masked latent pixels. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict +is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +truncate + +< +source +> +( +log_p_x_0: FloatTensor +truncation_rate: float + +) + + + +Truncates log_p_x_0 such that for each column vector, the total cumulative probability is truncation_rate The +lowest probabilities that would increase the cumulative probability above truncation_rate are set to zero. 
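Usage example

A minimal sketch of how the pipeline is typically invoked, assuming the microsoft/vq-diffusion-ithq checkpoint and a CUDA device; the prompt and the truncation_rate value are illustrative only.

from diffusers import VQDiffusionPipeline

pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe = pipe.to("cuda")

# truncation_rate < 1.0 zeroes out the lowest-probability latent classes, as described above
image = pipe(
    "teddy bear playing in the pool",
    num_inference_steps=100,
    truncation_rate=0.86,
).images[0]
image.save("teddy_bear.png")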
diff --git a/scrapped_outputs/c189f203fff24cefa9b9bcafe6517026.txt b/scrapped_outputs/c189f203fff24cefa9b9bcafe6517026.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee5d5916fb70fd8ec6cb76f08d4c82e91bebd0c4 --- /dev/null +++ b/scrapped_outputs/c189f203fff24cefa9b9bcafe6517026.txt @@ -0,0 +1,189 @@ +Linear multistep scheduler for discrete beta schedules + + +Overview + +Original implementation can be found here. + +LMSDiscreteScheduler + + +class diffusers.LMSDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by +Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +get_lms_coefficient + +< +source +> +( +order +t +current_order + +) + + +Parameters + +order (TODO) — + + +t (TODO) — + + +current_order (TODO) — + + + +Compute a linear multistep coefficient. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the K-LMS algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. 
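In practice you rarely call set_timesteps or step by hand; a common pattern is to swap this scheduler into an existing pipeline, which then drives it internally during denoising. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint used elsewhere in these docs and a CUDA device:

import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# reuse the pipeline's existing scheduler config so beta_start, beta_end, etc. stay consistent
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("astronaut.png")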
+ +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +order: int = 4 +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. +order — coefficient for multi-step inference. + + +return_dict (bool) — option for returning tuple rather than LMSDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/c1b740023482f0d379cc0aada82d9e44.txt b/scrapped_outputs/c1b740023482f0d379cc0aada82d9e44.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/c1b740023482f0d379cc0aada82d9e44.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from the source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) in some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/c1b7880f0f48c6fdee2c35ee9b11e1ab.txt b/scrapped_outputs/c1b7880f0f48c6fdee2c35ee9b11e1ab.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/c1b7880f0f48c6fdee2c35ee9b11e1ab.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. 
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. 
ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/c1b86bd240705985a9a6dbbeef2f7958.txt b/scrapped_outputs/c1b86bd240705985a9a6dbbeef2f7958.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c1cb0c8fc570ab0f4612dab61c0caf0d.txt b/scrapped_outputs/c1cb0c8fc570ab0f4612dab61c0caf0d.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/c1cb0c8fc570ab0f4612dab61c0caf0d.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/c1d726b0c26c70b26fccf6792c5d39d8.txt b/scrapped_outputs/c1d726b0c26c70b26fccf6792c5d39d8.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/c1d726b0c26c70b26fccf6792c5d39d8.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. 
Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namepsace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all it’s components to the Hub. For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. 
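To load from a private repository later, you need to authenticate first. A minimal sketch using huggingface_hub (the repository id is the one created above): Copied
from huggingface_hub import login
from diffusers import ControlNetModel

login()  # or login(token="hf_...") with a token that has read access to the repository
controlnet = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model-private")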
diff --git a/scrapped_outputs/c1fd6ae3cffc7bb80637cccc68e5f797.txt b/scrapped_outputs/c1fd6ae3cffc7bb80637cccc68e5f797.txt new file mode 100644 index 0000000000000000000000000000000000000000..118d04526fdacb6e280461a814f7dea84ba76932 --- /dev/null +++ b/scrapped_outputs/c1fd6ae3cffc7bb80637cccc68e5f797.txt @@ -0,0 +1,51 @@ +DDIMInverseScheduler DDIMInverseScheduler is the inverted scheduler from Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. +The implementation is mostly based on the DDIM inversion definition from Null-text Inversion for Editing Real Images using Guided Diffusion Models. DDIMInverseScheduler class diffusers.DDIMInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' clip_sample_range: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False **kwargs ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 0, otherwise +it uses the alpha value at step num_train_timesteps - 1. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use num_train_timesteps - 1 for the previous alpha +product. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMInverseScheduler is the reverse scheduler of DDIMScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. 
timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or +tuple. Returns +~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim_inverse.DDIMInverseSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/c2165e9a8997df1fe1d5fa2aacf5fbd2.txt b/scrapped_outputs/c2165e9a8997df1fe1d5fa2aacf5fbd2.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/c2165e9a8997df1fe1d5fa2aacf5fbd2.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. 
Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. 
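As a rough mental model (a simplified, hypothetical stand-in rather than the script's actual code), such a dataset pairs each training image with a templated caption that contains the placeholder token: Copied
import random
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

TEMPLATES = ["a photo of a {}", "a rendering of a {}", "a cropped photo of a {}"]  # illustrative

class SimpleTextualInversionDataset(Dataset):
    """Simplified illustration of a textual inversion dataset."""

    def __init__(self, data_root, tokenizer, placeholder_token, size=512, repeats=100):
        self.paths = sorted(p for p in Path(data_root).iterdir() if p.is_file())
        self.tokenizer = tokenizer
        self.placeholder_token = placeholder_token
        self.size = size
        self.repeats = repeats

    def __len__(self):
        return len(self.paths) * self.repeats

    def __getitem__(self, index):
        image = Image.open(self.paths[index % len(self.paths)]).convert("RGB").resize((self.size, self.size))
        prompt = random.choice(TEMPLATES).format(self.placeholder_token)
        input_ids = self.tokenizer(
            prompt,
            padding="max_length",
            truncation=True,
            max_length=self.tokenizer.model_max_length,
            return_tensors="pt",
        ).input_ids[0]
        # Scale pixels to [-1, 1] and move channels first, as expected by the VAE encoder.
        pixel_values = torch.from_numpy(np.array(image)).permute(2, 0, 1).float() / 127.5 - 1.0
        return {"pixel_values": pixel_values, "input_ids": input_ids}
The real TextualInversionDataset additionally handles things such as interpolation modes, center cropping, and style templates.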
Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. 
Add the following parameters to the training command: Copied --validation_prompt="A train" +--num_validation_images=4 +--validation_steps=100 PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A train", num_inference_steps=50).images[0] +image.save("cat-train.png") Next steps Congratulations on training your own Textual Inversion model! 🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/c231c6817c55abed1b2208a37be61284.txt b/scrapped_outputs/c231c6817c55abed1b2208a37be61284.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c235bffb436185c19fad7dcfeea4faf0.txt b/scrapped_outputs/c235bffb436185c19fad7dcfeea4faf0.txt new file mode 100644 index 0000000000000000000000000000000000000000..4f387bee8376e54f0f8c4aff4d0d0913887ec569 --- /dev/null +++ b/scrapped_outputs/c235bffb436185c19fad7dcfeea4faf0.txt @@ -0,0 +1,786 @@ +Accelerated PyTorch 2.0 support in Diffusers + +Starting from version 0.13.0, Diffusers supports the latest optimization from the upcoming PyTorch 2.0 release. These include: +Support for accelerated transformers implementation with memory-efficient attention – no extra dependencies required. +torch.compile support for extra performance boost when individual models are compiled. + +Installation + + +To benefit from the accelerated attention implementation and `torch.compile`, you just need to install the latest versions of PyTorch 2.0 from `pip`, and make sure you are on diffusers 0.13.0 or later. As explained below, `diffusers` automatically uses the attention optimizations (but not `torch.compile`) when available. + + + + Copied +pip install --upgrade torch torchvision diffusers + +Using accelerated transformers and torch.compile. + +Accelerated Transformers implementation +PyTorch 2.0 includes an optimized and memory-efficient attention implementation through the torch.nn.functional.scaled_dot_product_attention function, which automatically enables several optimizations depending on the inputs and the GPU type. This is similar to the memory_efficient_attention from xFormers, but built natively into PyTorch. +These optimizations will be enabled by default in Diffusers if PyTorch 2.0 is installed and if torch.nn.functional.scaled_dot_product_attention is available. To use it, just install torch 2.0 as suggested above and simply use the pipeline. 
For example: + + + Copied +import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +If you want to enable it explicitly (which is not required), you can do so as shown below. + + + Copied +import torch +from diffusers import DiffusionPipeline +from diffusers.models.attention_processor import AttnProcessor2_0 + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(AttnProcessor2_0()) + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +This should be as fast and memory efficient as xFormers. More details in our benchmark. +torch.compile +To get an additional speedup, we can use the new torch.compile feature. To do so, we simply wrap our unet with torch.compile. For more information and different options, refer to the +torch compile docs. + + + Copied +import torch +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipe.unet = torch.compile(pipe.unet) + +batch_size = 10 +steps = 50 +prompt = "A photo of an astronaut riding a horse on mars." +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images +Depending on the type of GPU, compile() can yield an additional 2-9% speed-up over the accelerated transformer optimizations. Note, however, that compilation is able to squeeze more performance improvements out of more recent GPU architectures such as Ampere (A100, 3090), Ada (4090) and Hopper (H100). +Compilation takes some time to complete, so it is best suited for situations where you need to prepare your pipeline once and then perform the same type of inference operations multiple times. + +Benchmark + +We conducted a simple benchmark on different GPUs to compare vanilla attention, xFormers, torch.nn.functional.scaled_dot_product_attention and torch.compile+torch.nn.functional.scaled_dot_product_attention. +For the benchmark we used the stable-diffusion-v1-4 model with 50 steps. The xFormers benchmark is done using the torch==1.13.1 version, while the accelerated transformers optimizations are tested using nightly versions of PyTorch 2.0. The tables below summarize the results we got. +Please refer to our featured blog post in the PyTorch site for more details. + +FP16 benchmark + +The table below shows the benchmark results for inference using fp16. As we can see, torch.nn.functional.scaled_dot_product_attention is as fast as xFormers (sometimes slightly faster/slower) on all the GPUs we tested. +And using torch.compile gives a further speed-up of up to 10% over xFormers, but it’s mostly noticeable on the A100 GPU. +The time reported is in seconds.
+GPU +Batch Size +Vanilla Attention +xFormers +PyTorch2.0 SDPA +SDPA + torch.compile +Speed over xformers (%) +A100 +1 +2.69 +2.7 +1.98 +2.47 +8.52 +A100 +2 +3.21 +3.04 +2.38 +2.78 +8.55 +A100 +4 +5.27 +3.91 +3.89 +3.53 +9.72 +A100 +8 +9.74 +7.03 +7.04 +6.62 +5.83 +A100 +10 +12.02 +8.7 +8.67 +8.45 +2.87 +A100 +16 +18.95 +13.57 +13.55 +13.20 +2.73 +A100 +32 (1) +OOM +26.56 +26.68 +25.85 +2.67 +A100 +64 + +52.51 +53.03 +50.93 +3.01 + + + + + + + +A10 +4 +13.94 +9.81 +10.01 +9.35 +4.69 +A10 +8 +27.09 +19 +19.53 +18.33 +3.53 +A10 +10 +33.69 +23.53 +24.19 +22.52 +4.29 +A10 +16 +OOM +37.55 +38.31 +36.81 +1.97 +A10 +32 (1) + +77.19 +78.43 +76.64 +0.71 +A10 +64 (1) + +173.59 +158.99 +155.14 +10.63 + + + + + + + +T4 +4 +38.81 +30.09 +29.74 +27.55 +8.44 +T4 +8 +OOM +55.71 +55.99 +53.85 +3.34 +T4 +10 +OOM +68.96 +69.86 +65.35 +5.23 +T4 +16 +OOM +111.47 +113.26 +106.93 +4.07 + + + + + + + +V100 +4 +9.84 +8.16 +8.09 +7.65 +6.25 +V100 +8 +OOM +15.62 +15.44 +14.59 +6.59 +V100 +10 +OOM +19.52 +19.28 +18.18 +6.86 +V100 +16 +OOM +30.29 +29.84 +28.22 +6.83 + + + + + + + +3090 +1 +2.94 +2.5 +2.42 +2.33 +6.80 +3090 +4 +10.04 +7.82 +7.72 +7.38 +5.63 +3090 +8 +19.27 +14.97 +14.88 +14.15 +5.48 +3090 +10 +24.08 +18.7 +18.62 +18.12 +3.10 +3090 +16 +OOM +29.06 +28.88 +28.2 +2.96 +3090 +32 (1) + +58.05 +57.42 +56.28 +3.05 +3090 +64 (1) + +126.54 +114.27 +112.21 +11.32 + + + + + + + +3090 Ti +1 +2.7 +2.26 +2.19 +2.12 +6.19 +3090 Ti +4 +9.07 +7.14 +7.00 +6.71 +6.02 +3090 Ti +8 +17.51 +13.65 +13.53 +12.94 +5.20 +3090 Ti +10 (2) +21.79 +16.85 +16.77 +16.44 +2.43 +3090 Ti +16 +OOM +26.1 +26.04 +25.53 +2.18 +3090 Ti +32 (1) + +51.78 +51.71 +50.91 +1.68 +3090 Ti +64 (1) + +112.02 +102.78 +100.89 +9.94 + + + + + + + +4090 +1 +4.47 +3.98 +1.28 +1.21 +69.60 +4090 +4 +10.48 +8.37 +3.76 +3.56 +57.47 +4090 +8 +14.33 +10.22 +7.43 +6.99 +31.60 +4090 +16 + +17.07 +14.98 +14.58 +14.59 +4090 +32 (1) + +39.03 +30.18 +29.49 +24.44 +4090 +64 (1) + +77.29 +61.34 +59.96 +22.42 + +FP32 benchmark + +The table below shows the benchmark results for inference using fp32. In this case, torch.nn.functional.scaled_dot_product_attention is faster than xFormers on all the GPUs we tested. +Using torch.compile in addition to the accelerated transformers implementation can yield up to 19% performance improvement over xFormers in Ampere and Ada cards, and up to 20% (Ampere) or 28% (Ada) over vanilla attention. 
+GPU +Batch Size +Vanilla Attention +xFormers +PyTorch2.0 SDPA +SDPA + torch.compile +Speed over xformers (%) +Speed over vanilla (%) +A100 +1 +4.97 +3.86 +2.6 +2.86 +25.91 +42.45 +A100 +2 +9.03 +6.76 +4.41 +4.21 +37.72 +53.38 +A100 +4 +16.70 +12.42 +7.94 +7.54 +39.29 +54.85 +A100 +10 +OOM +29.93 +18.70 +18.46 +38.32 + +A100 +16 + +47.08 +29.41 +29.04 +38.32 + +A100 +32 + +92.89 +57.55 +56.67 +38.99 + +A100 +64 + +185.3 +114.8 +112.98 +39.03 + + + + + + + + + +A10 +1 +10.59 +8.81 +7.51 +7.35 +16.57 +30.59 +A10 +4 +34.77 +27.63 +22.77 +22.07 +20.12 +36.53 +A10 +8 + +56.19 +43.53 +43.86 +21.94 + +A10 +16 + +116.49 +88.56 +86.64 +25.62 + +A10 +32 + +221.95 +175.74 +168.18 +24.23 + +A10 +48 + +333.23 +264.84 + +20.52 + + + + + + + + + +T4 +1 +28.2 +24.49 +23.93 +23.56 +3.80 +16.45 +T4 +2 +52.77 +45.7 +45.88 +45.06 +1.40 +14.61 +T4 +4 +OOM +85.72 +85.78 +84.48 +1.45 + +T4 +8 + +149.64 +150.75 +148.4 +0.83 + + + + + + + + + +V100 +1 +7.4 +6.84 +6.8 +6.66 +2.63 +10.00 +V100 +2 +13.85 +12.81 +12.66 +12.35 +3.59 +10.83 +V100 +4 +OOM +25.73 +25.31 +24.78 +3.69 + +V100 +8 + +43.95 +43.37 +42.25 +3.87 + +V100 +16 + +84.99 +84.73 +82.55 +2.87 + + + + + + + + + +3090 +1 +7.09 +6.78 +5.34 +5.35 +21.09 +24.54 +3090 +4 +22.69 +21.45 +18.56 +18.18 +15.24 +19.88 +3090 +8 + +42.59 +36.68 +35.61 +16.39 + +3090 +16 + +85.35 +72.93 +70.18 +17.77 + +3090 +32 (1) + +162.05 +143.46 +138.67 +14.43 + + + + + + + + + +3090 Ti +1 +6.45 +6.19 +4.99 +4.89 +21.00 +24.19 +3090 Ti +4 +20.32 +19.31 +17.02 +16.48 +14.66 +18.90 +3090 Ti +8 + +37.93 +33.21 +32.24 +15.00 + +3090 Ti +16 + +75.37 +66.63 +64.5 +14.42 + +3090 Ti +32 (1) + +142.55 +128.89 +124.92 +12.37 + + + + + + + + + +4090 +1 +5.54 +4.99 +2.66 +2.58 +48.30 +53.43 +4090 +4 +13.67 +11.4 +8.81 +8.46 +25.79 +38.11 +4090 +8 + +19.79 +17.55 +16.62 +16.02 + +4090 +16 + +38.62 +35.65 +34.07 +11.78 + +4090 +32 (1) + +76.57 +69.48 +65.35 +14.65 + +4090 +48 + +114.44 +106.3 + +7.11 + +(1) Batch Size >= 32 requires enable_vae_slicing() because of https://github.com/pytorch/pytorch/issues/81665. +This is required for PyTorch 1.13.1, and also for PyTorch 2.0 and large batch sizes. +For more details about how this benchmark was run, please refer to this PR and to the blog post. diff --git a/scrapped_outputs/c2449cddf09616de6736e5a0060ddb9b.txt b/scrapped_outputs/c2449cddf09616de6736e5a0060ddb9b.txt new file mode 100644 index 0000000000000000000000000000000000000000..07086f82bad81c666b44ca2d095feabb72569dd6 --- /dev/null +++ b/scrapped_outputs/c2449cddf09616de6736e5a0060ddb9b.txt @@ -0,0 +1,261 @@ +Pseudo numerical methods for diffusion models (PNDM) + + +Overview + +Original implementation can be found here. + +PNDMScheduler + + +class diffusers.PNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +skip_prk_steps: bool = False +set_alpha_to_one: bool = False +prediction_type: str = 'epsilon' +steps_offset: int = 0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. 
+ + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +skip_prk_steps (bool) — +allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required +before plms steps; defaults to False. + + +set_alpha_to_one (bool, default False) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process) +or v_prediction (see section 2.4 https://imagen.research.google/video/paper.pdf) + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + + +Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques, +namely Runge-Kutta method and a linear multi-step method. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). +This function calls step_prk() or step_plms() depending on the internal variable counter. 
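For orientation, a minimal sketch of swapping this scheduler into an existing pipeline (model id used only for illustration): Copied
import torch
from diffusers import DiffusionPipeline, PNDMScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
# Reuse the pipeline's existing scheduler config so settings such as steps_offset carry over.
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config)
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=50).images[0]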
+ +step_plms + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple +times to approximate the solution. + +step_prk + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the +solution to the differential equation. diff --git a/scrapped_outputs/c25b09e5127ecb90c847b8dbf5d75b68.txt b/scrapped_outputs/c25b09e5127ecb90c847b8dbf5d75b68.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed307c5e7ec0eba355d6da6f87807233e0a27eec --- /dev/null +++ b/scrapped_outputs/c25b09e5127ecb90c847b8dbf5d75b68.txt @@ -0,0 +1,43 @@ +DiT Scalable Diffusion Models with Transformers (DiT) is by William Peebles and Saining Xie. The abstract from the paper is: We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. The original codebase can be found at facebookresearch/dit. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
DiTPipeline class diffusers.DiTPipeline < source > ( transformer: Transformer2DModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers id2label: Optional = None ) Parameters transformer (Transformer2DModel) — +A class conditioned Transformer2DModel to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. scheduler (DDIMScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for image generation based on a Transformer backbone instead of a UNet. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( class_labels: List guidance_scale: float = 4.0 generator: Union = None num_inference_steps: int = 50 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters class_labels (List[int]) — +List of ImageNet class labels for the images to be generated. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiTPipeline, DPMSolverMultistepScheduler +>>> import torch + +>>> pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe = pipe.to("cuda") + +>>> # pick words from Imagenet class labels +>>> pipe.labels # to print all available words + +>>> # pick words that exist in ImageNet +>>> words = ["white shark", "umbrella"] + +>>> class_ids = pipe.get_label_ids(words) + +>>> generator = torch.manual_seed(33) +>>> output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +>>> image = output.images[0] # label 'white shark' get_label_ids < source > ( label: Union ) → list of int Parameters label (str or dict of str) — +Label strings to be mapped to class ids. Returns +list of int + +Class ids to be processed by pipeline. + Map label strings from ImageNet to corresponding class ids. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
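In practice, the two return styles of the pipeline look like this (a short sketch reusing the checkpoint from the example above): Copied
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")
class_ids = pipe.get_label_ids(["white shark"])

# return_dict=True (default): an ImagePipelineOutput with an `images` field
images = pipe(class_labels=class_ids, num_inference_steps=25).images

# return_dict=False: a plain tuple whose first element is the list of generated images
(images,) = pipe(class_labels=class_ids, num_inference_steps=25, return_dict=False)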
diff --git a/scrapped_outputs/c2a3e26d9fae3d828b6b7816a4e5f21f.txt b/scrapped_outputs/c2a3e26d9fae3d828b6b7816a4e5f21f.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/c2a3e26d9fae3d828b6b7816a4e5f21f.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories base on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/c2a721c3cda380b9b5458e84189c42d5.txt b/scrapped_outputs/c2a721c3cda380b9b5458e84189c42d5.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/c2a721c3cda380b9b5458e84189c42d5.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. 
The description from it’s GitHub page is: Kandinsky 2.1 inherits best practicies from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. 
Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. 
prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... 
).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. 
When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or a numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1.
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/c2d3b3197ff1c650ee88aa32c5b58421.txt b/scrapped_outputs/c2d3b3197ff1c650ee88aa32c5b58421.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ffbeca318ea60288f515ef9c440ebea9a984f50 --- /dev/null +++ b/scrapped_outputs/c2d3b3197ff1c650ee88aa32c5b58421.txt @@ -0,0 +1,80 @@ +UniPCMultistepScheduler UniPCMultistepScheduler is a training-free framework designed for fast sampling of diffusion models. It was introduced in UniPC: A Unified Predictor-Corrector Framework for Fast Sampling of Diffusion Models by Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu. It consists of a corrector (UniC) and a predictor (UniP) that share a unified analytical form and support arbitrary orders. +UniPC is by design model-agnostic, supporting pixel-space/latent-space DPMs on unconditional/conditional sampling.
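In practice the scheduler is used as a drop-in replacement for a pipeline's default scheduler. The snippet below is a minimal sketch of that swap; the runwayml/stable-diffusion-v1-5 checkpoint and the 20-step setting are illustrative choices rather than values taken from this page. Copied
import torch
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

# Load any diffusers pipeline and replace its scheduler with UniPC,
# reusing the existing scheduler configuration.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# UniPC targets few-step sampling, so a low step count is typical.
image = pipe("an astronaut riding a horse on mars", num_inference_steps=20).images[0]
image.save("unipc_astronaut.png")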
It can also be applied to both noise prediction and data prediction models. The corrector UniC can be also applied after any off-the-shelf solvers to increase the order of accuracy. The abstract from the paper is: Diffusion probabilistic models (DPMs) have demonstrated a very promising ability in high-resolution image synthesis. However, sampling from a pre-trained DPM is time-consuming due to the multiple evaluations of the denoising network, making it more and more important to accelerate the sampling of DPMs. Despite recent progress in designing fast samplers, existing methods still cannot generate satisfying images in many applications where fewer steps (e.g., <10) are favored. In this paper, we develop a unified corrector (UniC) that can be applied after any existing DPM sampler to increase the order of accuracy without extra model evaluations, and derive a unified predictor (UniP) that supports arbitrary order as a byproduct. Combining UniP and UniC, we propose a unified predictor-corrector framework called UniPC for the fast sampling of DPMs, which has a unified analytical form for any order and can significantly improve the sampling quality over previous methods, especially in extremely few steps. We evaluate our methods through extensive experiments including both unconditional and conditional sampling using pixel-space and latent-space DPMs. Our UniPC can achieve 3.87 FID on CIFAR10 (unconditional) and 7.51 FID on ImageNet 256×256 (conditional) with only 10 function evaluations. Code is available at this https URL. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both predict_x0=True and thresholding=True to use dynamic thresholding. This thresholding method is unsuitable for latent-space diffusion models such as Stable Diffusion. UniPCMultistepScheduler class diffusers.UniPCMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 predict_x0: bool = True solver_type: str = 'bh2' lower_order_final: bool = True disable_corrector: List = [] solver_p: SchedulerMixin = None use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, default 2) — +The UniPC order which can be any positive integer. The effective order of accuracy is solver_order + 1 +due to the UniC. It is recommended to use solver_order=2 for guided sampling, and solver_order=3 for +unconditional sampling. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and predict_x0=True. predict_x0 (bool, defaults to True) — +Whether to use the updating algorithm on the predicted x0. solver_type (str, default bh2) — +Solver type for UniPC. It is recommended to use bh1 for unconditional sampling when steps < 10, and bh2 +otherwise. lower_order_final (bool, default True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. disable_corrector (list, default []) — +Decides which step to disable the corrector to mitigate the misalignment between epsilon_theta(x_t, c) +and epsilon_theta(x_t^c, c) which can influence convergence for a large guidance scale. Corrector is +usually disabled during the first few steps. solver_p (SchedulerMixin, default None) — +Any other scheduler that if specified, the algorithm becomes solver_p + UniC. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. UniPCMultistepScheduler is a training-free framework designed for the fast sampling of diffusion models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the UniPC algorithm needs. multistep_uni_c_bh_update < source > ( this_model_output: FloatTensor *args last_sample: FloatTensor = None this_sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters this_model_output (torch.FloatTensor) — +The model outputs at x_t. this_timestep (int) — +The current timestep t. last_sample (torch.FloatTensor) — +The generated sample before the last predictor x_{t-1}. 
this_sample (torch.FloatTensor) — +The generated sample after the last predictor x_{t}. order (int) — +The p of UniC-p at this step. The effective order of accuracy should be order + 1. Returns +torch.FloatTensor + +The corrected sample tensor at the current timestep. + One step for the UniC (B(h) version). multistep_uni_p_bh_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model at the current timestep. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int) — +The order of UniP at this timestep (corresponds to the p in UniPC-p). Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the UniP (B(h) version). Alternatively, self.solver_p is used if is specified. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep UniPC. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/c2d7c74b714011a9567ecff8b46824de.txt b/scrapped_outputs/c2d7c74b714011a9567ecff8b46824de.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/c2d7c74b714011a9567ecff8b46824de.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. 
The key is to pass a list of torch.Generators to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/c2fa79957a8fa186d0d56328a90f34ee.txt b/scrapped_outputs/c2fa79957a8fa186d0d56328a90f34ee.txt new file mode 100644 index 0000000000000000000000000000000000000000..1303672965e9fdebfc1a9c219c08f87449df8999 --- /dev/null +++ b/scrapped_outputs/c2fa79957a8fa186d0d56328a90f34ee.txt @@ -0,0 +1,45 @@ +Stable Video Diffusion Stable Video Diffusion is a powerful image-to-video generation model that can generate high resolution (576x1024) 2-4 second videos conditioned on the input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate Image to Video Generation There are two variants of SVD: SVD +and SVD-XT. The svd checkpoint is trained to generate 14 frames and the svd-xt checkpoint is further +finetuned to generate 25 frames. We will use the svd-xt checkpoint for this guide.
Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) Source Image Video Since generating videos is more memory intensive we can use the `decode_chunk_size` argument to control how many frames are decoded at once. This will reduce the memory usage. It's recommended to tweak this value based on your GPU memory. +Setting `decode_chunk_size=1` will decode one frame at a time and will use the least amount of memory but the video might have some flickering. +Additionally, we also use model cpu offloading to reduce the memory usage. Torch.compile You can achieve a 20-25% speed-up at the expense of slightly increased memory by compiling the UNet as follows: Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Low-memory Video generation is very memory intensive as we have to essentially generate num_frames all at once. The mechanism is very comparable to text-to-image generation with a high batch size. To reduce the memory requirement you have multiple options. The following options trade inference speed against lower memory requirement: enable model offloading: Each component of the pipeline is offloaded to CPU once it’s not needed anymore. enable feed-forward chunking: The feed-forward layer runs in a loop instead of running with a single huge feed-forward batch size reduce decode_chunk_size: This means that the VAE decodes frames in chunks instead of decoding them all together. Note: In addition to leading to a small slowdown, this method also slightly leads to video quality deterioration You can enable them as follows: Copied -pipe.enable_model_cpu_offload() +-frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++pipe.enable_model_cpu_offload() ++pipe.unet.enable_forward_chunking() ++frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Including all these tricks should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Along with conditioning image Stable Diffusion Video also allows providing micro-conditioning that allows more control over the generated video. +It accepts the following arguments: fps: The frames per second of the generated video. motion_bucket_id: The motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id will increase the motion of the generated video. noise_aug_strength: The amount of noise added to the conditioning image. The higher the values the less the video will resemble the conditioning image. Increasing this value will also increase the motion of the generated video. Here is an example of using micro-conditioning to generate a video with more motion. 
Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/c309100b94787c95c9be3248bfc8ae56.txt b/scrapped_outputs/c309100b94787c95c9be3248bfc8ae56.txt new file mode 100644 index 0000000000000000000000000000000000000000..f971d25fc44aa74df592b1a56356146d3ed210ee --- /dev/null +++ b/scrapped_outputs/c309100b94787c95c9be3248bfc8ae56.txt @@ -0,0 +1,83 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable DIffusion with samplers from k-diffusion. Note that most the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future. 
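Since the class reference above does not include a usage snippet, here is a minimal sketch. It assumes the k-diffusion package is installed alongside diffusers, and that the sampler name passed to set_scheduler is one of the samplers exposed by k-diffusion (for example sample_dpmpp_2m); both the checkpoint and the sampler name are illustrative assumptions rather than values taken from this page. Copied
import torch
from diffusers import StableDiffusionKDiffusionPipeline

# Requires the `k-diffusion` package in addition to diffusers/transformers.
pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Select a k-diffusion sampler by name (assumed here: "sample_dpmpp_2m").
pipe.set_scheduler("sample_dpmpp_2m")

image = pipe("an astronaut riding a horse on mars", num_inference_steps=25).images[0]
image.save("k_diffusion_astronaut.png")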
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/c33db06fc04813d7115af74b92cc6b7f.txt b/scrapped_outputs/c33db06fc04813d7115af74b92cc6b7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0984527432f3dd6b6a86e93a43c1b2ef304cb3 --- /dev/null +++ b/scrapped_outputs/c33db06fc04813d7115af74b92cc6b7f.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass it to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. 
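For instance, a sketch of staying in latent space when chaining two pipelines (assuming the SDXL base and refiner checkpoints; the prompt is only illustrative):

Copied
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"

# return latents instead of decoded images so no VAE decode/encode round trip is needed
latents = base(prompt=prompt, output_type="latent").images
# the second pipeline consumes the latents directly
image = refiner(prompt=prompt, image=latents).images[0]
image.save("refined.png")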
VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to be False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to be False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) overlay the inpaint output to the original image binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Blurs an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0. Returns +tuple + +(x1, y1, x2, y2) represent a rectangular region that contains all masked ares in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked ares in an image, and expands region to match the aspect ratio of the original image; +for example, if user drew mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image(PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. if it is a numpy array, should have +shape [batch, height, width] or [batch, height, width, channel] if it is a pytorch tensor, should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the height of image input. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use the width of the image` input. This function return the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. 
numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of supported formats. height (int, optional, defaults to None) — +The height in preprocessed image. If None, will use the get_default_height_width() to get default height. width (int, optional, defaults to None) -- The width in preprocessed. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintaining the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling empty with data from image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. 
+Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map diff --git a/scrapped_outputs/c35b6e05bc78f3b3105ac4b83e81c131.txt b/scrapped_outputs/c35b6e05bc78f3b3105ac4b83e81c131.txt new file mode 100644 index 0000000000000000000000000000000000000000..8b0977586efe9e10fcd7d4008f90fed1622c72b5 --- /dev/null +++ b/scrapped_outputs/c35b6e05bc78f3b3105ac4b83e81c131.txt @@ -0,0 +1,868 @@ +Stable unCLIP + +Stable unCLIP checkpoints are finetuned from stable diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP also still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. + +Tips + +Stable unCLIP takes a noise_level as input during inference. noise_level determines how much noise is added +to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, +we do not add any additional noise to the image embeddings i.e. noise_level = 0. 
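As a quick sketch of how noise_level is passed at call time for image variation (reusing the placeholder checkpoint and example image from the snippets below; only the added noise_level argument is new here):

Copied
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import StableUnCLIPImg2ImgPipeline

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16
)  # TODO update model path
pipe = pipe.to("cuda")

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((768, 512))

# noise_level=0 (the default) stays closest to the input; higher values increase variation
images = pipe("A fantasy landscape, trending on artstation", init_image, noise_level=100).images
images[0].save("fantasy_landscape_variation.png")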
+ +Available checkpoints: + +TODO + +Text-to-Image Generation + + + + Copied +import torch +from diffusers import StableUnCLIPPipeline + +pipe = StableUnCLIPPipeline.from_pretrained( + "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +) # TODO update model path +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +images = pipe(prompt).images +images[0].save("astronaut_horse.png") + +Text guided Image-to-Image Variation + + + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import StableUnCLIPImg2ImgPipeline + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +) # TODO update model path +pipe = pipe.to("cuda") + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((768, 512)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt, init_image).images +images[0].save("fantasy_landscape.png") + +StableUnCLIPPipeline + + +class diffusers.StableUnCLIPPipeline + +< +source +> +( +prior_tokenizer: CLIPTokenizer +prior_text_encoder: CLIPTextModelWithProjection +prior: PriorTransformer +prior_scheduler: KarrasDiffusionSchedulers +image_normalizer: StableUnCLIPImageNormalizer +image_noising_scheduler: KarrasDiffusionSchedulers +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModelWithProjection +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +vae: AutoencoderKL + +) + + +Parameters + +prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. + + +prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. + + +image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. + + +image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by noise_level in StableUnCLIPPipeline.__call__. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + + +Pipeline for text-to-image generation using stable unCLIP. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 20 +guidance_scale: float = 10.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Optional[torch._C.Generator] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +prior_num_inference_steps: int = 25 +prior_guidance_scale: float = 4.0 +prior_latents: typing.Optional[torch.FloatTensor] = None + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings for details. + + +prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. + + +prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale for the prior denoising process as defined in Classifier-Free Diffusion +Guidance. prior_guidance_scale is defined as w of equation 2. of +Imagen Paper. Guidance scale is enabled by setting +guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to +the text prompt, usually at the expense of lower image quality. + + +prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor will ge generated by sampling using the supplied +random generator. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. 
+ + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +noise_image_embeddings + +< +source +> +( +image_embeds: Tensor +noise_level: int +noise: typing.Optional[torch.FloatTensor] = None +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. +The noise is applied in two ways +A noise schedule is applied directly to the embeddings +A vector of sinusoidal time embeddings are appended to the output. +In both cases, the amount of noise is controlled by the same noise_level. +The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. 
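Schematically, the method does something like the following (a simplified sketch, not the actual implementation; batching details, dtype handling, and the optional pre-generated noise/generator arguments are omitted, and the components are passed in explicitly for illustration):

Copied
import torch

def noise_image_embeddings_sketch(image_embeds, noise_level, image_normalizer, image_noising_scheduler, timestep_embedding):
    # image_normalizer / image_noising_scheduler are the pipeline components of the same name;
    # timestep_embedding is any sinusoidal embedding helper (e.g. diffusers.models.embeddings.get_timestep_embedding)
    noise = torch.randn_like(image_embeds)
    timesteps = torch.tensor([noise_level] * image_embeds.shape[0], device=image_embeds.device)

    # 1) normalize, apply the noise schedule directly to the embeddings, then un-normalize
    image_embeds = image_normalizer.scale(image_embeds)
    image_embeds = image_noising_scheduler.add_noise(image_embeds, timesteps=timesteps, noise=noise)
    image_embeds = image_normalizer.unscale(image_embeds)

    # 2) append sinusoidal time embeddings of the same noise_level to the output
    time_embeds = timestep_embedding(timesteps, embedding_dim=image_embeds.shape[-1])
    return torch.cat([image_embeds, time_embeds], dim=-1)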
+ +StableUnCLIPImg2ImgPipeline + + +class diffusers.StableUnCLIPImg2ImgPipeline + +< +source +> +( +feature_extractor: CLIPFeatureExtractor +image_encoder: CLIPVisionModelWithProjection +image_normalizer: StableUnCLIPImageNormalizer +image_noising_scheduler: KarrasDiffusionSchedulers +tokenizer: CLIPTokenizer +text_encoder: CLIPTextModel +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +vae: AutoencoderKL + +) + + +Parameters + +feature_extractor (CLIPFeatureExtractor) — +Feature extractor for image pre-processing before being encoded. + + +image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. + + +image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. + + +image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by noise_level in StableUnCLIPPipeline.__call__. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + + +Pipeline for text-guided image to image generation using stable unCLIP. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 20 +guidance_scale: float = 10 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Optional[torch._C.Generator] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +image_embeds: typing.Optional[torch.FloatTensor] = None + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch. The image will be encoded to its CLIP embedding which +the unet will be conditioned on. Note that the image is not encoded by the vae and then used as the +latents in the denoising process such as in the standard stable diffusion text guided image variation +process. 
+ + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. 
See StableUnCLIPPipeline.noise_image_embeddings for details. + + +image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. Note that these are not latents to be used in +the denoising process. If you want to provide pre-generated latents, pass them to __call__ as +latents. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +noise_image_embeddings + +< +source +> +( +image_embeds: Tensor +noise_level: int +noise: typing.Optional[torch.FloatTensor] = None +generator: typing.Optional[torch._C.Generator] = None + +) + + + +Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. +The noise is applied in two ways +A noise schedule is applied directly to the embeddings +A vector of sinusoidal time embeddings are appended to the output. +In both cases, the amount of noise is controlled by the same noise_level. +The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. diff --git a/scrapped_outputs/c35be047b562548c9193d8200dd9ce5c.txt b/scrapped_outputs/c35be047b562548c9193d8200dd9ce5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c35c2bb1ab41ed0f4bc1b4479fd6486d.txt b/scrapped_outputs/c35c2bb1ab41ed0f4bc1b4479fd6486d.txt new file mode 100644 index 0000000000000000000000000000000000000000..4760b4efe8b8dbb32af0b2b6a3453f7f5646bcf8 --- /dev/null +++ b/scrapped_outputs/c35c2bb1ab41ed0f4bc1b4479fd6486d.txt @@ -0,0 +1,15 @@ +Score SDE VE Score-Based Generative Modeling through Stochastic Differential Equations (Score SDE) is by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. This pipeline implements the variance expanding (VE) variant of the stochastic differential equation method. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. 
We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. The original codebase can be found at yang-song/score_sde_pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ScoreSdeVePipeline class diffusers.ScoreSdeVePipeline < source > ( unet: UNet2DModel scheduler: ScoreSdeVeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (ScoreSdeVeScheduler) — +A ScoreSdeVeScheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 2000 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
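A short usage sketch for unconditional sampling with this pipeline (the checkpoint name is an assumption; any Score SDE VE checkpoint pairing a UNet2DModel with a ScoreSdeVeScheduler should work, and the default 2000 steps make sampling slow):

Copied
import torch
from diffusers import ScoreSdeVePipeline

pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-church-256")  # assumed checkpoint
pipe = pipe.to("cuda")

# unconditional generation; num_inference_steps defaults to 2000 as documented above
image = pipe(batch_size=1, num_inference_steps=2000).images[0]
image.save("sde_ve_sample.png")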
diff --git a/scrapped_outputs/c39c0270e1b8444dfdf8d600dce210bc.txt b/scrapped_outputs/c39c0270e1b8444dfdf8d600dce210bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c39eafd46fe17912d73380436acd0f13.txt b/scrapped_outputs/c39eafd46fe17912d73380436acd0f13.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b91d27246c47e715d8fb32343e10ffa0337626a --- /dev/null +++ b/scrapped_outputs/c39eafd46fe17912d73380436acd0f13.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in +our example below. 
Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/c39fb10ae023fd726529dd119d9d94fb.txt b/scrapped_outputs/c39fb10ae023fd726529dd119d9d94fb.txt new file mode 100644 index 0000000000000000000000000000000000000000..1be12d79ba5093a72f6b36cdd1b7acba966736f4 --- /dev/null +++ b/scrapped_outputs/c39fb10ae023fd726529dd119d9d94fb.txt @@ -0,0 +1,42 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. 
timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/c3e44560f3f244260a9cb7a53742ae2d.txt b/scrapped_outputs/c3e44560f3f244260a9cb7a53742ae2d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/c3e44560f3f244260a9cb7a53742ae2d.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. 
Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. 
Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. 
Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/c3f512b95b520166ab4664b6604c9a89.txt b/scrapped_outputs/c3f512b95b520166ab4664b6604c9a89.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/c3f512b95b520166ab4664b6604c9a89.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means its usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x.
Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. 
Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an 
image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/c4252e75b6b87532ddbc1f094ded7108.txt b/scrapped_outputs/c4252e75b6b87532ddbc1f094ded7108.txt new file mode 100644 index 0000000000000000000000000000000000000000..c64e5338e7b801217166447f9876dee342fd9e20 --- /dev/null +++ b/scrapped_outputs/c4252e75b6b87532ddbc1f094ded7108.txt @@ -0,0 +1,100 @@ +UNet Some training methods - like LoRA and Custom Diffusion - typically target the UNet’s attention layers, but these training methods can also target other non-attention layers. Instead of training all of a model’s parameters, only a subset of the parameters are trained, which is faster and more efficient. This class is useful if you’re only loading weights into a UNet. If you need to load weights into the text encoder or a text encoder and UNet, try using the load_lora_weights() function instead. The UNet2DConditionLoadersMixin class provides functions for loading and saving weights, fusing and unfusing LoRAs, disabling and enabling LoRAs, and setting and deleting adapters. To learn more about how to load LoRA weights, see the LoRA loading guide. UNet2DConditionLoadersMixin class diffusers.loaders.UNet2DConditionLoadersMixin < source > ( ) Load LoRA layers into a UNet2DConditionModel. delete_adapters < source > ( adapter_names: Union ) Parameters adapter_names (Union[List[str], str]) — +The names (single string or list of strings) of the adapter to delete. Delete an adapter’s LoRA layers from the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.delete_adapters("cinematic") disable_lora < source > ( ) Disable the UNet’s active LoRA layers.
Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.disable_lora() enable_lora < source > ( ) Enable the UNet’s active LoRA layers. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.enable_lora() load_attn_procs < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load pretrained attention processor layers into UNet2DConditionModel. 
Attention processor layers have to be +defined in +attention_processor.py +and be a torch.nn.Module class. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.unet.load_attn_procs( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) save_attn_procs < source > ( save_directory: Union is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save an attention processor to (will be created if it doesn’t exist). is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or with pickle. Save attention processor layers to a directory so that it can be reloaded with the +load_attn_procs() method. Example: Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.unet.save_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") set_adapters < source > ( adapter_names: Union weights: Union = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. adapter_weights (Union[List[float], float], optional) — +The adapter(s) weights to use with the UNet. If None, the weights are set to 1.0 for all the +adapters. Set the currently active adapters for use in the UNet. Example: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights( + "jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors", adapter_name="cinematic" +) +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.set_adapters(["cinematic", "pixel"], adapter_weights=[0.5, 0.5]) diff --git a/scrapped_outputs/c43da5247bfaaf8ca1b607b59b4d7d06.txt b/scrapped_outputs/c43da5247bfaaf8ca1b607b59b4d7d06.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc45cc411c1e99044b02de9de0b70f888962c563 --- /dev/null +++ b/scrapped_outputs/c43da5247bfaaf8ca1b607b59b4d7d06.txt @@ -0,0 +1,42 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. 
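As a quick orientation before the API reference below, here is a minimal sketch of how this scheduler might be swapped into a pipeline with from_config(); the checkpoint id, prompt, and step count are illustrative assumptions, and the scheduler additionally requires the torchsde package to be installed: Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverSDEScheduler

# assumed checkpoint for illustration; any Stable Diffusion-style checkpoint should work similarly
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

# reuse the existing scheduler config so the beta schedule and timestep settings carry over
pipe.scheduler = DPMSolverSDEScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=30).images[0]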
DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/c445e065f6878673b8535c3fa7e086b5.txt b/scrapped_outputs/c445e065f6878673b8535c3fa7e086b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c477bbe980c114bef4121e83f4fb1936.txt b/scrapped_outputs/c477bbe980c114bef4121e83f4fb1936.txt new file mode 100644 index 0000000000000000000000000000000000000000..07c26006c9cf463e7cf6147858153f2079e6c0ef --- /dev/null +++ b/scrapped_outputs/c477bbe980c114bef4121e83f4fb1936.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. 
To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details). 
Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + 
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 21.66 | 23.13 | 44.03 | 49.74 |
| SD - img2img | 21.81 | 22.40 | 43.92 | 46.32 |
| SD - inpaint | 22.24 | 23.23 | 43.76 | 49.25 |
| SD - controlnet | 15.02 | 15.82 | 32.13 | 36.08 |
| IF | 20.21 / 13.84 / 24.00 | 20.12 / 13.70 / 24.03 | ❌ | 97.34 / 27.23 / 111.66 |
| SDXL - txt2img | 8.64 | 9.9 | - | - |

A100 (batch size: 4)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 11.6 | 13.12 | 14.62 | 17.27 |
| SD - img2img | 11.47 | 13.06 | 14.66 | 17.25 |
| SD - inpaint | 11.67 | 13.31 | 14.88 | 17.48 |
| SD - controlnet | 8.28 | 9.38 | 10.51 | 12.41 |
| IF | 25.02 | 18.04 | ❌ | 48.47 |
| SDXL - txt2img | 2.44 | 2.74 | - | - |

A100 (batch size: 16)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 3.04 | 3.6 | 3.83 | 4.68 |
| SD - img2img | 2.98 | 3.58 | 3.83 | 4.67 |
| SD - inpaint | 3.04 | 3.66 | 3.9 | 4.76 |
| SD - controlnet | 2.15 | 2.58 | 2.74 | 3.35 |
| IF | 8.78 | 9.82 | ❌ | 16.77 |
| SDXL - txt2img | 0.64 | 0.72 | - | - |

V100 (batch size: 1)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 18.99 | 19.14 | 20.95 | 22.17 |
| SD - img2img | 18.56 | 19.18 | 20.95 | 22.11 |
| SD - inpaint | 19.14 | 19.06 | 21.08 | 22.20 |
| SD - controlnet | 13.48 | 13.93 | 15.18 | 15.88 |
| IF | 20.01 / 9.08 / 23.34 | 19.79 / 8.98 / 24.10 | ❌ | 55.75 / 11.57 / 57.67 |

V100 (batch size: 4)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 5.96 | 5.89 | 6.83 | 6.86 |
| SD - img2img | 5.90 | 5.91 | 6.81 | 6.82 |
| SD - inpaint | 5.99 | 6.03 | 6.93 | 6.95 |
| SD - controlnet | 4.26 | 4.29 | 4.92 | 4.93 |
| IF | 15.41 | 14.76 | ❌ | 22.95 |

V100 (batch size: 16)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 1.66 | 1.66 | 1.92 | 1.90 |
| SD - img2img | 1.65 | 1.65 | 1.91 | 1.89 |
| SD - inpaint | 1.69 | 1.69 | 1.95 | 1.93 |
| SD - controlnet | 1.19 | 1.19 | OOM after warmup | 1.36 |
| IF | 5.43 | 5.29 | ❌ | 7.06 |

T4 (batch size: 1)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 6.9 | 6.95 | 7.3 | 7.56 |
| SD - img2img | 6.84 | 6.99 | 7.04 | 7.55 |
| SD - inpaint | 6.91 | 6.7 | 7.01 | 7.37 |
| SD - controlnet | 4.89 | 4.86 | 5.35 | 5.48 |
| IF | 17.42 / 2.47 / 18.52 | 16.96 / 2.45 / 18.69 | ❌ | 24.63 / 2.47 / 23.39 |
| SDXL - txt2img | 1.15 | 1.16 | - | - |

T4 (batch size: 4)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 1.79 | 1.79 | 2.03 | 1.99 |
| SD - img2img | 1.77 | 1.77 | 2.05 | 2.04 |
| SD - inpaint | 1.81 | 1.82 | 2.09 | 2.09 |
| SD - controlnet | 1.34 | 1.27 | 1.47 | 1.46 |
| IF | 5.79 | 5.61 | ❌ | 7.39 |
| SDXL - txt2img | 0.288 | 0.289 | - | - |

T4 (batch size: 16)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 2.34s | 2.30s | OOM after 2nd iteration | 1.99s |
| SD - img2img | 2.35s | 2.31s | OOM after warmup | 2.00s |
| SD - inpaint | 2.30s | 2.26s | OOM after 2nd iteration | 1.95s |
| SD - controlnet | OOM after 2nd iteration | OOM after 2nd iteration | OOM after warmup | OOM after warmup |
| IF * | 1.44 | 1.44 | ❌ | 1.94 |
| SDXL - txt2img | OOM | OOM | - | - |

RTX 3090 (batch size: 1)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 22.56 | 22.84 | 23.84 | 25.69 |
| SD - img2img | 22.25 | 22.61 | 24.1 | 25.83 |
| SD - inpaint | 22.22 | 22.54 | 24.26 | 26.02 |
| SD - controlnet | 16.03 | 16.33 | 17.38 | 18.56 |
| IF | 27.08 / 9.07 / 31.23 | 26.75 / 8.92 / 31.47 | ❌ | 68.08 / 11.16 / 65.29 |

RTX 3090 (batch size: 4)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 6.46 | 6.35 | 7.29 | 7.3 |
| SD - img2img | 6.33 | 6.27 | 7.31 | 7.26 |
| SD - inpaint | 6.47 | 6.4 | 7.44 | 7.39 |
| SD - controlnet | 4.59 | 4.54 | 5.27 | 5.26 |
| IF | 16.81 | 16.62 | ❌ | 21.57 |

RTX 3090 (batch size: 16)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 1.7 | 1.69 | 1.93 | 1.91 |
| SD - img2img | 1.68 | 1.67 | 1.93 | 1.9 |
| SD - inpaint | 1.72 | 1.71 | 1.97 | 1.94 |
| SD - controlnet | 1.23 | 1.22 | 1.4 | 1.38 |
| IF | 5.01 | 5.00 | ❌ | 6.33 |

RTX 4090 (batch size: 1)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 40.5 | 41.89 | 44.65 | 49.81 |
| SD - img2img | 40.39 | 41.95 | 44.46 | 49.8 |
| SD - inpaint | 40.51 | 41.88 | 44.58 | 49.72 |
| SD - controlnet | 29.27 | 30.29 | 32.26 | 36.03 |
| IF | 69.71 / 18.78 / 85.49 | 69.13 / 18.80 / 85.56 | ❌ | 124.60 / 26.37 / 138.79 |
| SDXL - txt2img | 6.8 | 8.18 | - | - |

RTX 4090 (batch size: 4)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 12.62 | 12.84 | 15.32 | 15.59 |
| SD - img2img | 12.61 | 12.79 | 15.35 | 15.66 |
| SD - inpaint | 12.65 | 12.81 | 15.3 | 15.58 |
| SD - controlnet | 9.1 | 9.25 | 11.03 | 11.22 |
| IF | 31.88 | 31.14 | ❌ | 43.92 |
| SDXL - txt2img | 2.19 | 2.35 | - | - |

RTX 4090 (batch size: 16)

| Pipeline | torch 2.0 - no compile | torch nightly - no compile | torch 2.0 - compile | torch nightly - compile |
| --- | --- | --- | --- | --- |
| SD - txt2img | 3.17 | 3.2 | 3.84 | 3.85 |
| SD - img2img | 3.16 | 3.2 | 3.84 | 3.85 |
| SD - inpaint | 3.17 | 3.2 | 3.85 | 3.85 |
| SD - controlnet | 2.23 | 2.3 | 2.7 | 2.75 |
| IF | 9.26 | 9.2 | ❌ | 13.31 |
| SDXL - txt2img | 0.52 | 0.53 | - | - |

Notes

Follow this PR for more details on the environment used for conducting the benchmarks.
For the DeepFloyd IF pipeline with batch sizes > 1, a batch size > 1 was only used in the first IF pipeline (text-to-image generation) and not for upscaling; that is, the two upscaling pipelines received a batch size of 1.
Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers.

diff --git a/scrapped_outputs/c4a32da7c63aec0e6bce49943cb4c914.txt b/scrapped_outputs/c4a32da7c63aec0e6bce49943cb4c914.txt new file mode 100644 index 0000000000000000000000000000000000000000..75fd75154200d2b6d6fa296048d043f23b017c54 --- /dev/null +++ b/scrapped_outputs/c4a32da7c63aec0e6bce49943cb4c914.txt @@ -0,0 +1,84 @@

Performing inference with LCM

Latent Consistency Models (LCMs) enable high-quality image generation in typically 2-4 steps, making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. This guide shows how to perform inference with LCMs for text-to-image and image-to-image generation tasks. It also covers performing inference with LoRA checkpoints.

Text-to-image

You'll use the StableDiffusionXLPipeline here, swapping in a distilled UNet. The UNet was distilled from the SDXL UNet using the framework introduced in LCM. Another important component is the scheduler: LCMScheduler. Together with the distilled UNet and the scheduler, LCM enables a fast inference workflow that overcomes the slow iterative nature of diffusion models.
Copied

from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0
).images[0]

Notice that we use only 4 steps for generation, which is far fewer than what is typically used for standard SDXL. Some details to keep in mind:

To perform classifier-free guidance, the batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to faster inference, with the drawback that negative prompts don't have any effect on the denoising process.
The UNet was trained using guidance scale values in the [3., 13.] range, so that is the ideal range for guidance_scale. However, disabling guidance by setting guidance_scale to 1.0 is also effective in most cases.

Image-to-image

The findings above apply to image-to-image tasks too. Let's look at how we can perform image-to-image generation with LCMs:

Copied

from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import load_image
import torch

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "High altitude snowy mountains"
image = load_image(
    "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/snowy_mountains.jpeg"
)

generator = torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    image=image,
    num_inference_steps=4,
    generator=generator,
    guidance_scale=8.0,
).images[0]

LoRA

It is possible to generalize the LCM framework to use LoRA. This effectively eliminates the need for expensive full fine-tuning runs, since LoRA training updates only a small number of parameters. During inference, the LCMScheduler is the key advantage: it enables inference in very few steps without compromising quality. We recommend disabling guidance by setting guidance_scale to 0; the model is trained to follow prompts accurately even without guidance. You can, however, still use guidance, in which case we recommend values between 1.0 and 2.0.
Text-to-image

Copied

from diffusers import DiffusionPipeline, LCMScheduler
import torch

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lcm_lora_id = "latent-consistency/lcm-lora-sdxl"

pipe = DiffusionPipeline.from_pretrained(model_id, variant="fp16", torch_dtype=torch.float16).to("cuda")

pipe.load_lora_weights(lcm_lora_id)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux"
image = pipe(
    prompt=prompt,
    num_inference_steps=4,
    guidance_scale=0,  # set guidance scale to 0 to disable it
).images[0]

Image-to-image

Extending LCM LoRA to image-to-image is possible:

Copied

from diffusers import StableDiffusionXLImg2ImgPipeline, LCMScheduler
from diffusers.utils import load_image
import torch

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
lcm_lora_id = "latent-consistency/lcm-lora-sdxl"

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(model_id, variant="fp16", torch_dtype=torch.float16).to("cuda")

pipe.load_lora_weights(lcm_lora_id)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

prompt = "close-up photography of old man standing in the rain at night, in a street lit by lamps, leica 35mm summilux"

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/lcm/lora_lcm.png")

image = pipe(
    prompt=prompt,
    image=image,
    num_inference_steps=4,
    guidance_scale=0,  # set guidance scale to 0 to disable it
).images[0]

diff --git a/scrapped_outputs/c4d45b35a92c736b2ba67c3151e197de.txt b/scrapped_outputs/c4d45b35a92c736b2ba67c3151e197de.txt new file mode 100644 index 0000000000000000000000000000000000000000..499d88bc0e5374d11fae3590eafbb42edb1b8fe8 --- /dev/null +++ b/scrapped_outputs/c4d45b35a92c736b2ba67c3151e197de.txt @@ -0,0 +1,3027 @@

Models

Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models. The primary function of these models is to denoise an input sample by modeling the distribution p_{\theta}(x_{t-1} \mid x_{t}). The models are built on the base class ModelMixin, a torch.nn.Module with basic functionality for saving and loading models both locally and from the Hugging Face Hub.

ModelMixin

class diffusers.ModelMixin

< source > ( )

Base class for all models. ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading and saving models.

config_name (str) — A filename under which the model should be stored when calling save_pretrained().

disable_gradient_checkpointing

< source > ( )

Deactivates gradient checkpointing for the current model. Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint activations".

disable_xformers_memory_efficient_attention

< source > ( )

Disable memory efficient attention as implemented in xformers.

enable_gradient_checkpointing

< source > ( )

Activates gradient checkpointing for the current model. Note that in other frameworks this feature can be referred to as "activation checkpointing" or "checkpoint activations".
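To make the two checkpointing toggles above concrete, here is a minimal sketch; the checkpoint id and subfolder mirror the xformers example further below and are only illustrative:

from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1", subfolder="unet"
)

# Recompute activations during the backward pass instead of storing them,
# trading extra compute for lower memory while fine-tuning.
unet.enable_gradient_checkpointing()

# ... training steps would go here ...

# Restore the default behavior (store activations) before running inference.
unet.disable_gradient_checkpointing()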
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. 
+ + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +max_memory (Dict, optional) — +A dictionary device identifier to maximum memory. Will default to the maximum memory available for each +GPU and the available CPU RAM if unset. + + +offload_folder (str or os.PathLike, optional) — +If the device_map contains any value "disk", the folder where we will offload weights. + + +offload_state_dict (bool, optional) — +If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU +RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to +True when there is some disk offload. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + +use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights will be downloaded if they’re available and if the +safetensors library is installed. If set to True, the model will be forcibly loaded from +safetensors weights. If set to False, loading will not use safetensors. + + + +Instantiate a pretrained pytorch model from a pre-trained model configuration. +The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train +the model, you should first set it back in training mode with model.train(). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. 
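As a small, hedged illustration of the loading path described above (the checkpoint id and subfolder mirror the earlier xformers example, and the local directory name is made up):

import torch
from diffusers import UNet2DConditionModel

# Download (or reuse the local cache) and instantiate the pretrained weights in half precision.
# The model is returned in evaluation mode; call .train() before fine-tuning it.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    subfolder="unet",
    torch_dtype=torch.float16,
)

# Round trip through a local directory: save_pretrained writes the config and weights,
# and from_pretrained accepts the directory path in place of a Hub model id.
unet.save_pretrained("./my_unet")
unet = UNet2DConditionModel.from_pretrained("./my_unet")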
+ +num_parameters + +< +source +> +( +only_trainable: bool = False +exclude_embeddings: bool = False + +) +→ +int + +Parameters + +only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters + + +exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embeddings parameters + + +Returns + +int + + + +The number of parameters. + + +Get number of (optionally, trainable or non-embeddings) parameters in the module. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +save_function: typing.Callable = None +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/main/en/api/models#diffusers.ModelMixin.from_pretrained) class method. + +UNet2DOutput + + +class diffusers.models.unet_2d.UNet2DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states output. Output of last layer of model. + + + + +UNet2DModel + + +class diffusers.UNet2DModel + +< +source +> +( +sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None +in_channels: int = 3 +out_channels: int = 3 +center_input_sample: bool = False +time_embedding_type: str = 'positional' +freq_shift: int = 0 +flip_sin_to_cos: bool = True +down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') +block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) +layers_per_block: int = 2 +mid_block_scale_factor: float = 1 +downsample_padding: int = 1 +act_fn: str = 'silu' +attention_head_dim: typing.Optional[int] = 8 +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +resnet_time_scale_shift: str = 'default' +add_attention: bool = True +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). + + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. 
+ + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. + + +freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:True): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")): Tuple of downsample block +types. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(224, 448, 672, 896)): Tuple of block output channels. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. + + +downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +attention_head_dim (int, optional, defaults to 8) — The attention head dimension. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + + +UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +class_labels: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. + + +Returns + +UNet2DOutput or tuple + + + +UNet2DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. 
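The forward signature above can be exercised with a single denoising prediction. The checkpoint id below is the example model id mentioned in the from_pretrained documentation; whether the UNet sits at the repository root or inside a unet subfolder is an assumption, so adjust accordingly:

import torch
from diffusers import UNet2DModel

# Load an unconditional UNet; pass subfolder="unet" instead if the checkpoint is stored as part of a pipeline.
model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
model.eval()

sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size)
with torch.no_grad():
    # Returns a UNet2DOutput whose .sample field has the same shape as the input.
    prediction = model(sample, timestep=50).sample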
+ + + +UNet1DOutput + + +class diffusers.models.unet_1d.UNet1DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +Hidden states output. Output of last layer of model. + + + + +UNet1DModel + + +class diffusers.UNet1DModel + +< +source +> +( +sample_size: int = 65536 +sample_rate: typing.Optional[int] = None +in_channels: int = 2 +out_channels: int = 2 +extra_in_channels: int = 0 +time_embedding_type: str = 'fourier' +flip_sin_to_cos: bool = True +use_timestep_embedding: bool = False +freq_shift: float = 0.0 +down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') +mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' +out_block_type: str = None +block_out_channels: typing.Tuple[int] = (32, 32, 64) +act_fn: str = None +norm_num_groups: int = 8 +layers_per_block: int = 1 +downsample_each_block: bool = False + +) + + +Parameters + +sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. + + +in_channels (int, optional, defaults to 2) — Number of channels in the input sample. + + +out_channels (int, optional, defaults to 2) — Number of channels in the output. + + +extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model is initially designed for. + + +time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. + + +freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:False): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(32, 32, 64)): Tuple of block output channels. + + +mid_block_type (str, optional, defaults to “UNetMidBlock1D”) — block type for middle of UNet. + + +out_block_type (str, optional, defaults to None) — optional output processing of UNet. + + +act_fn (str, optional, defaults to None) — optional activation function in UNet blocks. + + +norm_num_groups (int, optional, defaults to 8) — group norm member count in UNet blocks. + + +layers_per_block (int, optional, defaults to 1) — added number of layers in a UNet block. + + +downsample_each_block (int, optional, defaults to False — +experimental feature for using a UNet without upsampling. + + + +UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) 
+ +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet1DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch_size, num_channels, sample_size) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. + + +Returns + +UNet1DOutput or tuple + + + +UNet1DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + + +UNet2DConditionOutput + + +class diffusers.models.unet_2d_condition.UNet2DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet2DConditionModel + + +class diffusers.UNet2DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +center_input_sample: bool = False +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: typing.Union[int, typing.Tuple[int]] = 1280 +encoder_hid_dim: typing.Optional[int] = None +encoder_hid_dim_type: typing.Optional[str] = None +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +dual_cross_attention: bool = False +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +addition_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +resnet_skip_time_act: bool = False +resnet_out_scale_factor: int = 1.0 +time_embedding_type: str = 'positional' +time_embedding_dim: typing.Optional[int] = None +time_embedding_act_fn: typing.Optional[str] = None +timestep_post_act: typing.Optional[str] = None +time_cond_proj_dim: typing.Optional[int] = None +conv_in_kernel: int = 3 +conv_out_kernel: int = 3 +projection_class_embeddings_input_dim: typing.Optional[int] = None +class_embeddings_concat: bool = False +mid_block_only_cross_attention: typing.Optional[bool] = None +cross_attention_norm: typing.Optional[str] = None +addition_embed_type_num_heads = 64 + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. 
+ + +flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +The mid block type. Choose from UNetMidBlock2DCrossAttn or UNetMidBlock2DSimpleCrossAttn, will skip the +mid block layer if None. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. + + +encoder_hid_dim (int, optional, defaults to None) — +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. + + +encoder_hid_dim_type (str, optional, defaults to None) — +If given, the encoder_hidden_states and potentially other embeddings will be down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". + + +addition_embed_type (str, optional, defaults to None) — +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + +time_embedding_type (str, optional, default to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. 
+ + +time_embedding_dim (int, optional, default to None) — +An optional override for the dimension of the projected time embedding. + + +time_embedding_act_fn (str, optional, default to None) — +Optional activation function to use on the time embeddings only one time before they as passed to the rest +of the unet. Choose from silu, mish, gelu, and swish. + + +timestep_post_act (str, *optional*, default to None) -- The second activation function to use in timestep embedding. Choose from silu, mishandgelu`. + + +time_cond_proj_dim (int, optional, default to None) — +The dimension of cond_proj layer in timestep embedding. + + +conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. + + +conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. + + +projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +using the “projection” class_embed_type. Required when using the “projection” class_embed_type. + + +class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. + + +mid_block_only_cross_attention (bool, optional, defaults to None) — +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value will be used as the value for mid_block_only_cross_attention. Else, it will +default to False. + + + +UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +added_cond_kwargs: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +encoder_attention_mask: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +encoder_attention_mask (torch.Tensor) — +(batch, sequence_length) cross-attention mask, applied to encoder_hidden_states. True = keep, False = +discard. Mask will be converted into a bias, which adds large negative values to attention scores +corresponding to “discard” tokens. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. 
+ + +added_cond_kwargs (dict, optional) — +A kwargs dictionary that if specified includes additonal conditions that can be used for additonal time +embeddings or encoder hidden states projections. See the configurations encoder_hid_dim_type and +addition_embed_type for more information. + + +Returns + +UNet2DConditionOutput or tuple + + + +UNet2DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. 
— + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +UNet3DConditionOutput + + +class diffusers.models.unet_3d_condition.UNet3DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet3DConditionModel + + +class diffusers.UNet3DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') +up_block_types: typing.Tuple[str] = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1024 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 64 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +num_attention_heads (int, optional) — The number of attention heads. + + + +UNet3DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) 
+ +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +~models.unet_2d_condition.UNet3DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, num_frames, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet3DConditionOutput instead of a plain tuple. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~models.unet_2d_condition.UNet3DConditionOutput or tuple + + + +~models.unet_2d_condition.UNet3DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
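As a rough sketch of enabling sliced attention on a loaded 3D UNet (the checkpoint id is an assumption, not taken from this page; any repository that ships a UNet3DConditionModel in a unet subfolder would do):

from diffusers import UNet3DConditionModel

unet = UNet3DConditionModel.from_pretrained("damo-vilab/text-to-video-ms-1.7b", subfolder="unet")

# "auto" halves the input to the attention heads, so attention is computed in two steps,
# trading a small slowdown for lower peak memory.
unet.set_attention_slice("auto")

# When memory is no longer a concern, restore the default attention processors,
# which also removes the sliced processors installed above.
unet.set_default_attn_processor()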
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +DecoderOutput + + +class diffusers.models.vae.DecoderOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + + +Output of decoding method. + +VQEncoderOutput + + +class diffusers.models.vq_model.VQEncoderOutput + +< +source +> +( +latents: FloatTensor + +) + + +Parameters + +latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Encoded output sample of the model. Output of the last layer of the model. + + + +Output of VQModel encoding method. 
+ +VQModel + + +class diffusers.VQModel + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 3 +sample_size: int = 32 +num_vq_embeddings: int = 256 +norm_num_groups: int = 32 +vq_embed_dim: typing.Optional[int] = None +scaling_factor: float = 0.18215 +norm_type: str = 'group' + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. + + +vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray +Kavukcuoglu. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +return_dict: bool = True + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + + +AutoencoderKLOutput + + +class diffusers.models.autoencoder_kl.AutoencoderKLOutput + +< +source +> +( +latent_dist: DiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. 
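Ahead of the AutoencoderKL class documented next, here is a hedged sketch of the usual encode/decode round trip that produces the outputs described above; the standalone VAE checkpoint id is an assumption (a pipeline's VAE loaded with subfolder="vae" behaves the same way):

import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.enable_tiling()  # optional: process large inputs tile by tile to bound memory use

images = torch.randn(1, 3, 512, 512)  # stand-in for a batch of preprocessed images in [-1, 1]
with torch.no_grad():
    posterior = vae.encode(images).latent_dist                         # AutoencoderKLOutput.latent_dist
    latents = posterior.sample() * vae.config.scaling_factor           # scale latents as described above
    decoded = vae.decode(latents / vae.config.scaling_factor).sample   # DecoderOutput.sample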
+ +AutoencoderKL + + +class diffusers.AutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma +and Max Welling. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +disable_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_slicing was previously invoked, this method will go back to computing +decoding in one step. + +disable_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. + +enable_tiling + +< +source +> +( +use_tiling: bool = True + +) + + + +Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow +the processing of larger images. + +forward + +< +source +> +( +sample: FloatTensor +sample_posterior: bool = False +return_dict: bool = True +generator: typing.Optional[torch._C.Generator] = None + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
+ + + + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +tiled_decode + +< +source +> +( +z: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled decoding is — + + +different from non-tiled decoding due to each tile using a different decoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +z (torch.FloatTensor): Input batch of latent vectors. return_dict (bool, optional, defaults to +True): +Whether or not to return a DecoderOutput instead of a plain tuple. + + + +Decode a batch of images using a tiled decoder. + +tiled_encode + +< +source +> +( +x: FloatTensor +return_dict: bool = True + +) + + +Parameters + +When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several — + + +steps. This is useful to keep memory use constant regardless of image size. 
The end result of tiled encoding is — + + +different from non-tiled encoding due to each tile using a different encoder. To avoid tiling artifacts, the — + + +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the — + + +look of the output, but they should be much less noticeable. — +x (torch.FloatTensor): Input batch of images. return_dict (bool, optional, defaults to True): +Whether or not to return a AutoencoderKLOutput instead of a plain tuple. + + + +Encode a batch of images using a tiled encoder. + +Transformer2DModel + + +class diffusers.Transformer2DModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +num_vector_embeds: typing.Optional[int] = None +patch_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +num_embeds_ada_norm: typing.Optional[int] = None +use_linear_projection: bool = False +only_cross_attention: bool = False +upcast_attention: bool = False +norm_type: str = 'layer_norm' +norm_elementwise_affine: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +num_vector_embeds (int, optional) — +Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. +Includes the class for the masked latent pixel. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +num_embeds_ada_norm ( int, optional) — Pass if at least one of the norm_layers is AdaLayerNorm. +The number of diffusion steps used during training. Note that this is fixed at training time as it is used +to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for +up to but not more than steps than num_embeds_ada_norm. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + + +Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual +embeddings) inputs. +When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard +transformer action. Finally, reshape to image. +When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional +embeddings applied, see ImagePositionalEmbeddings. Then apply standard transformer action. Finally, predict +classes of unnoised image. 
+Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised +image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. + +forward + +< +source +> +( +hidden_states: Tensor +encoder_hidden_states: typing.Optional[torch.Tensor] = None +timestep: typing.Optional[torch.LongTensor] = None +class_labels: typing.Optional[torch.LongTensor] = None +cross_attention_kwargs: typing.Dict[str, typing.Any] = None +attention_mask: typing.Optional[torch.Tensor] = None +encoder_attention_mask: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +Transformer2DModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continuous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.LongTensor, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +encoder_attention_mask ( torch.Tensor, optional ). — +Cross-attention mask, applied to encoder_hidden_states. Two formats supported: +Mask (batch, sequence_length) True = keep, False = discard. Bias (batch, 1, sequence_length) 0 += keep, -10000 = discard. +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +Transformer2DModelOutput or tuple + + + +Transformer2DModelOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_2d.Transformer2DModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +Hidden states conditioned on encoder_hidden_states input. If discrete, returns probability distributions +for the unnoised latent pixels. + + + + +TransformerTemporalModel + + +class diffusers.models.transformer_temporal.TransformerTemporalModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +norm_elementwise_affine: bool = True +double_self_attention: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. 
+ + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + +double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers + + + +Transformer model for video-like data. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +num_frames = 1 +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +~models.transformer_2d.TransformerTemporalModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +~models.transformer_2d.TransformerTemporalModelOutput or tuple + + + +~models.transformer_2d.TransformerTemporalModelOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_temporal.TransformerTemporalModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. + + + + +PriorTransformer + + +class diffusers.PriorTransformer + +< +source +> +( +num_attention_heads: int = 32 +attention_head_dim: int = 64 +num_layers: int = 20 +embedding_dim: int = 768 +num_embeddings = 77 +additional_embeddings = 4 +dropout: float = 0.0 + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. + + +num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. + + +embedding_dim (int, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP +image embeddings and text embeddings are both the same dimension. 
+ + +num_embeddings (int, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the +length of the prompt after it has been tokenized. + + +additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + + +The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the +transformer predicts the image embeddings through a denoising diffusion process. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +For more details, see the original paper: https://arxiv.org/abs/2204.06125 + +forward + +< +source +> +( +hidden_states +timestep: typing.Union[torch.Tensor, float, int] +proj_embedding: FloatTensor +encoder_hidden_states: FloatTensor +attention_mask: typing.Optional[torch.BoolTensor] = None +return_dict: bool = True + +) +→ +PriorTransformerOutput or tuple + +Parameters + +hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +x_t, the currently predicted image embeddings. + + +timestep (torch.long) — +Current denoising step. + + +proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. + + +encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. + + +attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. + + +Returns + +PriorTransformerOutput or tuple + + + +PriorTransformerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. 
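As an illustration of the shapes expected by forward(), here is a hedged sketch that instantiates a deliberately tiny, randomly initialized PriorTransformer (all configuration values below are arbitrary toy numbers, not a trained checkpoint) and runs a single denoising step on random tensors:

Copied
import torch
from diffusers import PriorTransformer

prior = PriorTransformer(
    num_attention_heads=2,
    attention_head_dim=4,
    num_layers=1,
    embedding_dim=8,
    num_embeddings=7,
    additional_embeddings=4,
)

batch = 1
hidden_states = torch.randn(batch, 8)             # x_t: the current image-embedding estimate
proj_embedding = torch.randn(batch, 8)            # projected conditioning embedding
encoder_hidden_states = torch.randn(batch, 7, 8)  # text hidden states, one row per token
timestep = torch.tensor([10])                     # current denoising step

output = prior(hidden_states, timestep, proj_embedding, encoder_hidden_states)
print(output.predicted_image_embedding.shape)     # torch.Size([1, 8]) == (batch, embedding_dim)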
+ + + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +PriorTransformerOutput + + +class diffusers.models.prior_transformer.PriorTransformerOutput + +< +source +> +( +predicted_image_embedding: FloatTensor + +) + + +Parameters + +predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. 
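To show what the set_attn_processor and set_default_attn_processor methods documented above do in practice, the sketch below uses a toy, randomly initialized prior (the configuration values are placeholders, and the processor choice is only an example that requires PyTorch 2.0) to swap every attention layer to the scaled-dot-product implementation and then restore the default:

Copied
from diffusers import PriorTransformer
from diffusers.models.attention_processor import AttnProcessor2_0

prior = PriorTransformer(
    num_attention_heads=2, attention_head_dim=4, num_layers=1, embedding_dim=8, num_embeddings=7
)

# a single processor instance is applied to every attention layer
prior.set_attn_processor(AttnProcessor2_0())

# attn_processors maps module paths to processors; pass a dict with these keys for per-layer control
print(list(prior.attn_processors.keys()))

# revert to the library's default attention implementation
prior.set_default_attn_processor()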
+ + + + +ControlNetOutput + + +class diffusers.models.controlnet.ControlNetOutput + +< +source +> +( +down_block_res_samples: typing.Tuple[torch.Tensor] +mid_block_res_sample: Tensor + +) + + + + +ControlNetModel + + +class diffusers.ControlNetModel + +< +source +> +( +in_channels: int = 4 +conditioning_channels: int = 3 +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +projection_class_embeddings_input_dim: typing.Optional[int] = None +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +global_pool_conditions: bool = False + +) + + + + +from_unet + +< +source +> +( +unet: UNet2DConditionModel +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) +load_weights_from_unet: bool = True + +) + + +Parameters + +unet (UNet2DConditionModel) — +UNet model which weights are copied to the ControlNet. Note that all configuration options are also +copied where applicable. + + + +Instantiate Controlnet class from UNet2DConditionModel. + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as num_attention_heads // slice_size. In this case, +num_attention_heads must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. 
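A short, hedged sketch of the from_unet path described above (the checkpoint name is an assumption): initialize a ControlNet from an existing Stable Diffusion UNet and then enable sliced attention as documented for set_attention_slice:

Copied
from diffusers import ControlNetModel, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# copy the matching UNet weights and configuration into a freshly initialized ControlNet
controlnet = ControlNetModel.from_unet(unet)

# optional: compute attention in slices to lower peak memory at a small speed cost
controlnet.set_attention_slice("auto")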
+ +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.attention_processor.AttnProcessor, diffusers.models.attention_processor.AttnProcessor2_0, diffusers.models.attention_processor.XFormersAttnProcessor, diffusers.models.attention_processor.SlicedAttnProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor, diffusers.models.attention_processor.SlicedAttnAddedKVProcessor, diffusers.models.attention_processor.AttnAddedKVProcessor2_0, diffusers.models.attention_processor.XFormersAttnAddedKVProcessor, diffusers.models.attention_processor.LoRAAttnProcessor, diffusers.models.attention_processor.LoRAXFormersAttnProcessor, diffusers.models.attention_processor.LoRAAttnProcessor2_0, diffusers.models.attention_processor.LoRAAttnAddedKVProcessor, diffusers.models.attention_processor.CustomDiffusionAttnProcessor, diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor]]] + +) + + +Parameters + +`processor (dict of AttentionProcessor or AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all Attention layers. + + +In case processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. — + + + + +set_default_attn_processor + +< +source +> +( +) + + + +Disables custom attention processors and sets the default attention implementation. + +FlaxModelMixin + + +class diffusers.FlaxModelMixin + +< +source +> +( +) + + + +Base class for all flax models. +FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, +downloading and saving models. + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +dtype: dtype = +*model_args +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids are namespaced under a user or organization name, like +runwayml/stable-diffusion-v1-5. +A path to a directory containing model weights saved using save_pretrained(), +e.g., ./my_model_directory/. + + + +dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified all the computation will be performed with the given dtype. 
+Note that this only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and +~ModelMixin.to_bf16. + + +model_args (sequence of positional arguments, optional) — +All remaining positional arguments will be passed to the underlying model’s __init__ method. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., +output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, **kwargs will be directly passed to the +underlying model’s __init__ method (we assume all relevant updates to the configuration have +already been done) +If a configuration is not provided, kwargs will be first passed to the configuration class +initialization function (from_config()). Each key of kwargs that corresponds to +a configuration attribute will be used to override said attribute with the supplied kwargs +value. Remaining keys that do not correspond to any configuration attribute will be passed to the +underlying model’s __init__ function. + + + + +Instantiate a pretrained flax model from a pre-trained model configuration. +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). 
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +is_main_process: bool = True + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/main/en/api/models#diffusers.FlaxModelMixin.from_pretrained) class method + +to_bf16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. +This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) + +to_fp16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip + + + +Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. +This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. 
+
+Examples:
+
+
+ Copied
+>>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # load model
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32, to cast these to float16
+>>> params = model.to_fp16(params)
+>>> # If you don't want to cast certain parameters (for example layer norm bias and scale)
+>>> # then pass the mask as follows
+>>> from flax import traverse_util
+
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> flat_params = traverse_util.flatten_dict(params)
+>>> mask = {
+...     path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
+...     for path in flat_params
+... }
+>>> mask = traverse_util.unflatten_dict(mask)
+>>> params = model.to_fp16(params, mask)
+
+to_fp32
+
+<
+source
+>
+(
+params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict]
+mask: typing.Any = None
+
+)
+
+
+Parameters
+
+params (Union[Dict, FrozenDict]) —
+A PyTree of model parameters.
+
+
+mask (Union[Dict, FrozenDict]) —
+A PyTree with same structure as the params tree. The leaves should be booleans, True for params
+you want to cast, and False for those you want to skip.
+
+
+
+Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the
+model parameters to fp32 precision. This returns a new params tree and does not cast the params in place.
+
+Examples:
+
+
+ Copied
+>>> from diffusers import FlaxUNet2DConditionModel
+
+>>> # Download model and configuration from huggingface.co
+>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5")
+>>> # By default, the model params will be in fp32; to illustrate the use of this method,
+>>> # we'll first cast to fp16 and back to fp32
+>>> params = model.to_fp16(params)
+>>> # now cast back to fp32
+>>> params = model.to_fp32(params)
+
+FlaxUNet2DConditionOutput
+
+
+class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput
+
+<
+source
+>
+(
+sample: ndarray
+
+)
+
+
+Parameters
+
+sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) —
+Hidden states conditioned on encoder_hidden_states input. Output of last layer of model.
+
+
+
+
+replace
+
+<
+source
+>
+(
+**updates
+
+)
+
+
+
+Returns a new object replacing the specified fields with new values.
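Tying together the FlaxModelMixin methods documented above, here is an illustrative sketch (the repository name, the subfolder, and the availability of Flax weights are assumptions; pass from_pt=True to convert a PyTorch checkpoint instead) that loads a Flax UNet, casts its parameters to bfloat16, and saves the result locally:

Copied
from diffusers import FlaxUNet2DConditionModel

# assumes the repository ships Flax weights in an "unet" subfolder
model, params = FlaxUNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# to_bf16 returns a new params PyTree; the original is left untouched
params = model.to_bf16(params)

# save_pretrained writes the model config together with the params PyTree
model.save_pretrained("./flax_unet_bf16", params=params)

# the directory can later be reloaded with FlaxUNet2DConditionModel.from_pretrained("./flax_unet_bf16")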
+ +FlaxUNet2DConditionModel + + +class diffusers.FlaxUNet2DConditionModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +use_memory_efficient_attention: bool = False +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — +The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. The corresponding class names will be: “FlaxUpBlock2D”, +“FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +use_memory_efficient_attention (bool, optional, defaults to False) — +enable memory efficient attention https://arxiv.org/abs/2112.05682 + + + +FlaxUNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a +timestep and returns sample shaped output. +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. 
+Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxDecoderOutput + + +class diffusers.models.vae_flax.FlaxDecoderOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +Parameters dtype + + + +Output of decoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKLOutput + + +class diffusers.models.vae_flax.FlaxAutoencoderKLOutput + +< +source +> +( +latent_dist: FlaxDiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKL + + +class diffusers.FlaxAutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 +dtype: dtype = +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — +Input channels + + +out_channels (int, optional, defaults to 3) — +Output channels + + +down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +DownEncoder block type + + +up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +UpDecoder block type + + +block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple containing the number of output channels for each block + + +layers_per_block (int, optional, defaults to 2) — +Number of Resnet layer for each block + + +act_fn (str, optional, defaults to silu) — +Activation function + + +latent_channels (int, optional, defaults to 4) — +Latent space channels + + +norm_num_groups (int, optional, defaults to 32) — +Norm num group + + +sample_size (int, optional, defaults to 32) — +Sample input size + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 +/ scaling_factor z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. 
+ + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +parameters dtype + + + +Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational +Bayes by Diederik P. Kingma and Max Welling. +This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxControlNetOutput + + +class diffusers.models.controlnet_flax.FlaxControlNetOutput + +< +source +> +( +down_block_res_samples: ndarray +mid_block_res_sample: ndarray + +) + + + + +replace + +< +source +> +( +**updates + +) + + + +“Returns a new object replacing the specified fields with new values. + +FlaxControlNetModel + + +class diffusers.FlaxControlNetModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Tuple[int] = (16, 32, 96, 256) +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +controlnet_conditioning_channel_order (str, optional, defaults to rgb) — +The channel order of conditional image. 
Will convert it to rgb if it’s bgr + + +conditioning_embedding_out_channels (tuple, optional, defaults to (16, 32, 96, 256)) — +The tuple of output channel for each block in conditioning_embedding layer + + + +Quoting from https://arxiv.org/abs/2302.05543: “Stable Diffusion uses a pre-processing method similar to VQ-GAN +[11] to convert the entire dataset of 512 × 512 images into smaller 64 × 64 “latent images” for stabilized +training. This requires ControlNets to convert image-based conditions to 64 × 64 feature space to match the +convolution size. We use a tiny network E(·) of four convolution layers with 4 × 4 kernels and 2 × 2 strides +(activated by ReLU, channels are 16, 32, 64, 128, initialized with Gaussian weights, trained jointly with the full +model) to encode image-space conditions … into feature maps …” +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization diff --git a/scrapped_outputs/c4f0519d42d2ad8d8f60b068a1f99ee6.txt b/scrapped_outputs/c4f0519d42d2ad8d8f60b068a1f99ee6.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/c4f0519d42d2ad8d8f60b068a1f99ee6.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/c4f9e9dd39c497c04ad0a5f339a8624c.txt b/scrapped_outputs/c4f9e9dd39c497c04ad0a5f339a8624c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c4face6be629c692e4f274c5fa7ab0a6.txt b/scrapped_outputs/c4face6be629c692e4f274c5fa7ab0a6.txt new file mode 100644 index 0000000000000000000000000000000000000000..843875e320b6bcdb29106ed38d7b3cffd10030d2 --- /dev/null +++ b/scrapped_outputs/c4face6be629c692e4f274c5fa7ab0a6.txt @@ -0,0 +1,232 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. 
Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the paper). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. +A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. 
Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference You can make use of torch.compile function and gain a speed-up of about 2-3x: Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equal spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equal spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusions import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. 
+Memory savings are higher than using enable_model_cpu_offload, but performance is lower. WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt. Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with decoder to denoise the image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and +width=int(24*10.67)=256 in order to match the training conditions (a short illustrative calculation follows this class description). Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
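To make the latent_dim_scale relationship concrete, here is a small illustrative calculation (not part of the original API reference; the 24×24 size is just the example value from the parameter description above): Copied
# Illustrative only: how the decoder's VQ latent size follows from the prior's
# image-embedding size and latent_dim_scale (default 10.67).
latent_dim_scale = 10.67
embed_height, embed_width = 24, 24  # example spatial size of the prior's image embeddings
vq_latent_height = int(embed_height * latent_dim_scale)  # 256
vq_latent_width = int(embed_width * latent_dim_scale)  # 256
print(vq_latent_height, vq_latent_width)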
__call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... 
).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt).images Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/c54cfc9668523316a5b27ac0cd8c2c31.txt b/scrapped_outputs/c54cfc9668523316a5b27ac0cd8c2c31.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c5732bd39ecce8f34b110488e3a88ec2.txt b/scrapped_outputs/c5732bd39ecce8f34b110488e3a88ec2.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc000e3f1f4798b3b57e43c2f0af0e2e06c9cce --- /dev/null +++ b/scrapped_outputs/c5732bd39ecce8f34b110488e3a88ec2.txt @@ -0,0 +1,65 @@ +Latent Consistency Model Multistep Scheduler Overview Multistep and onestep scheduler (Algorithm 3) introduced alongside latent consistency models in the paper Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. +This scheduler should be able to generate good samples from LatentConsistencyModelPipeline in 1-8 steps. LCMScheduler class diffusers.LCMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'scaled_linear' trained_betas: Union = None original_inference_steps: int = 50 clip_sample: bool = False clip_sample_range: float = 1.0 set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' timestep_scaling: float = 10.0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "scaled_linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. original_inference_steps (int, optional, defaults to 50) — +The default number of inference steps used to generate a linearly-spaced timestep schedule, from which we +will ultimately take num_inference_steps evenly spaced timesteps to form the final timestep schedule. clip_sample (bool, defaults to False) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha.
When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. timestep_scaling (float, defaults to 10.0) — +The factor the timesteps will be multiplied by when calculating the consistency model boundary conditions +c_skip and c_out. Increasing this will decrease the approximation error (although the approximation +error at the default of 10.0 is already pretty small). rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. LCMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. ~ConfigMixin takes care of storing all config +attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be +accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving +functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None original_inference_steps: Optional = None timesteps: Optional = None strength: int = 1.0 ) Parameters num_inference_steps (int, optional) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. original_inference_steps (int, optional) — +The original number of inference steps, which will be used to generate a linearly-spaced timestep +schedule (which is different from the standard diffusers implementation). 
We will then take +num_inference_steps timesteps from this schedule, evenly spaced in terms of indices, and use that as +our final timestep schedule. If not set, this will default to the original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep +schedule is used. If timesteps is passed, num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.LCMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a LCMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.LCMSchedulerOutput or tuple + +If return_dict is True, LCMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/c5774e0b087790919808f78cb210ee7c.txt b/scrapped_outputs/c5774e0b087790919808f78cb210ee7c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c5bb24b109c69107144ea651c648ca6d.txt b/scrapped_outputs/c5bb24b109c69107144ea651c648ca6d.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/c5bb24b109c69107144ea651c648ca6d.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! 
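One more tip: unconditional sampling starts from random noise, so every call produces a different butterfly. If you want reproducible results, you can seed the sampling with a torch.Generator. A minimal sketch (not part of the original guide), assuming the underlying DDPM pipeline accepts the usual generator keyword like most Diffusers pipelines: Copied
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
# Fixed seed -> the same butterfly on every run (assumes the pipeline forwards
# `generator` to its sampling loop, as standard Diffusers pipelines do).
rng = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(generator=rng, num_inference_steps=100).images[0]
image.save("generated_image_seed0.png")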
diff --git a/scrapped_outputs/c5e9f0d5c5f6f6e1ab9e991a3812aac2.txt b/scrapped_outputs/c5e9f0d5c5f6f6e1ab9e991a3812aac2.txt new file mode 100644 index 0000000000000000000000000000000000000000..28be7c2be08b90122a456c3dc3dafcfdbac176dc --- /dev/null +++ b/scrapped_outputs/c5e9f0d5c5f6f6e1ab9e991a3812aac2.txt @@ -0,0 +1,75 @@ +AutoPipeline 🤗 Diffusers is able to complete many different tasks, and you can often reuse the same pretrained weights for multiple tasks such as text-to-image, image-to-image, and inpainting. If you’re new to the library and diffusion models though, it may be difficult to know which pipeline to use for a task. For example, if you’re using the runwayml/stable-diffusion-v1-5 checkpoint for text-to-image, you might not know that you could also use it for image-to-image and inpainting by loading the checkpoint with the StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline classes respectively. The AutoPipeline class is designed to simplify the variety of pipelines in 🤗 Diffusers. It is a generic, task-first pipeline that lets you focus on the task. The AutoPipeline automatically detects the correct pipeline class to use, which makes it easier to load a checkpoint for a task without knowing the specific pipeline class name. Take a look at the AutoPipeline reference to see which tasks are supported. Currently, it supports text-to-image, image-to-image, and inpainting. This tutorial shows you how to use an AutoPipeline to automatically infer the pipeline class to load for a specific task, given the pretrained weights. Choose an AutoPipeline for your task Start by picking a checkpoint. For example, if you’re interested in text-to-image with the runwayml/stable-diffusion-v1-5 checkpoint, use AutoPipelineForText2Image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "peasant and dragon combat, wood cutting style, viking era, bevel with rune" + +image = pipeline(prompt, num_inference_steps=25).images[0] +image Under the hood, AutoPipelineForText2Image: automatically detects a "stable-diffusion" class from the model_index.json file loads the corresponding text-to-image StableDiffusionPipeline based on the "stable-diffusion" class name Likewise, for image-to-image, AutoPipelineForImage2Image detects a "stable-diffusion" checkpoint from the model_index.json file and it’ll load the corresponding StableDiffusionImg2ImgPipeline behind the scenes. 
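If you are curious what AutoPipeline actually reads, you can peek at the checkpoint's model_index.json yourself. A minimal sketch (not part of the original tutorial), assuming huggingface_hub is installed: Copied
import json
from huggingface_hub import hf_hub_download

# Download only the pipeline config and print the class name AutoPipeline matches on.
config_path = hf_hub_download("runwayml/stable-diffusion-v1-5", "model_index.json")
with open(config_path) as f:
    print(json.load(f)["_class_name"])  # "StableDiffusionPipeline"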
You can also pass any additional arguments specific to the pipeline class such as strength, which determines the amount of noise or variation added to an input image: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from PIL import Image +from io import BytesIO + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +prompt = "a portrait of a dog wearing a pearl earring" + +url = "https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/1665_Girl_with_a_Pearl_Earring.jpg/800px-1665_Girl_with_a_Pearl_Earring.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0] +image And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionInpaintPipeline class in the same way: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).convert("RGB") +mask_image = load_image(mask_url).convert("RGB") + +prompt = "A majestic tiger sitting on a bench" +image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0] +image If you try to load an unsupported checkpoint, it’ll throw an error: Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True +) +"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None" Use multiple pipelines For some workflows or if you’re loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them which would unnecessarily consume additional memory. For example, if you’re using a checkpoint for text-to-image and you want to use it again for image-to-image, use the from_pipe() method. This method creates a new pipeline from the components of a previously loaded pipeline at no additional memory cost. The from_pipe() method detects the original pipeline class and maps it to the new pipeline class corresponding to the task you want to do. 
For example, if you load a "stable-diffusion" class pipeline for text-to-image: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +print(type(pipeline_text2img)) +"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'>" Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(type(pipeline_img2img)) +"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'>" If you passed an optional argument - like disabling the safety checker - to the original pipeline, this argument is also passed on to the new pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline_text2img = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, + requires_safety_checker=False, +).to("cuda") + +pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img) +print(pipeline_img2img.config.requires_safety_checker) +"False" You can overwrite any of the arguments and even configuration from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument: Copied pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img, requires_safety_checker=True, strength=0.3) +print(pipeline_img2img.config.requires_safety_checker) +"True" diff --git a/scrapped_outputs/c5fc173b295089c24c566f3a6b690a1d.txt b/scrapped_outputs/c5fc173b295089c24c566f3a6b690a1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c6438e834da6171c3314965e5f47e17c.txt b/scrapped_outputs/c6438e834da6171c3314965e5f47e17c.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/c6438e834da6171c3314965e5f47e17c.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation: generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch.
If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/c665216c5170a563de5d2c7eecdb6abd.txt b/scrapped_outputs/c665216c5170a563de5d2c7eecdb6abd.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c6b620f3dbd50306a026b31440c2080d.txt b/scrapped_outputs/c6b620f3dbd50306a026b31440c2080d.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ac22e44f82a2bfeede971a5b1063163f7e9fc2 --- /dev/null +++ b/scrapped_outputs/c6b620f3dbd50306a026b31440c2080d.txt @@ -0,0 +1,176 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. 
These checkpoints are meant to work with any model based on Stable Diffusion 1.5 Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9 channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image. Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: masterpiece, bestquality, sunset. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. 
Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a hat" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") masterpiece, bestquality, sunset. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → TextToVideoSDPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("../checkpoints/pia-diffusers") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... 
) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Optional = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[PIL.Image.Image]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, — NumPy array of shape `(batch_size, num_frames, channels, height, width, — Torch tensor of shape (batch_size, num_frames, channels, height, width). — Output class for PIAPipeline. diff --git a/scrapped_outputs/c6d1dc5906a84739e100cc56d25df06c.txt b/scrapped_outputs/c6d1dc5906a84739e100cc56d25df06c.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/c6d1dc5906a84739e100cc56d25df06c.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. 
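As a quick illustration of how the model maps images to and from this quantized latent space, here is a minimal sketch (not taken from the original docs); it uses a randomly initialized VQModel with its default config rather than trained weights: Copied
import torch
from diffusers import VQModel

model = VQModel()  # default config: 3-channel images, 3 latent channels
image = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    latents = model.encode(image).latents          # latent representation (quantized during decoding)
    reconstruction = model.decode(latents).sample  # decoded back to image space
print(latents.shape, reconstruction.shape)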
The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. 
Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/c6d778130b44d050532e8c59060a7dab.txt b/scrapped_outputs/c6d778130b44d050532e8c59060a7dab.txt new file mode 100644 index 0000000000000000000000000000000000000000..68ff112b968d56ed709f7889837161b8952ee99b --- /dev/null +++ b/scrapped_outputs/c6d778130b44d050532e8c59060a7dab.txt @@ -0,0 +1,235 @@ +AutoPipeline AutoPipeline is designed to: make it easy for you to load a checkpoint for a task without knowing the specific pipeline class to use use multiple pipelines in your workflow Based on the task, the AutoPipeline class automatically retrieves the relevant pipeline given the name or path to the pretrained weights with the from_pretrained() method. To seamlessly switch between tasks with the same checkpoint without reallocating additional memory, use the from_pipe() method to transfer the components from the original pipeline to the new one. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = pipeline(prompt, num_inference_steps=25).images[0] Check out the AutoPipeline tutorial to learn how to use this API! AutoPipeline supports text-to-image, image-to-image, and inpainting for the following diffusion models: Stable Diffusion ControlNet Stable Diffusion XL (SDXL) DeepFloyd IF Kandinsky 2.1 Kandinsky 2.2 AutoPipelineForText2Image class diffusers.AutoPipelineForText2Image < source > ( *args **kwargs ) AutoPipelineForText2Image is a generic pipeline class that instantiates a text-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. 
force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error.
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates a text-to-image PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object, and then finding the text-to-image pipeline linked to the pipeline class using pattern matching on the pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetPipeline object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForText2Image + +>>> pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates a text-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the text-to-image +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_t2i = AutoPipelineForText2Image.from_pipe(pipe_i2i) +>>> image = pipe_t2i(prompt).images[0] AutoPipelineForImage2Image class diffusers.AutoPipelineForImage2Image < source > ( *args **kwargs ) AutoPipelineForImage2Image is a generic pipeline class that instantiates an image-to-image pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error).
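For orientation before the attribute and method reference that follows, here is a short, hypothetical end-to-end sketch of the image-to-image flow (the input image URL is a placeholder, and strength is just one reasonable value, not a recommendation from the original docs):

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# AutoPipelineForImage2Image resolves this checkpoint to an image-to-image pipeline class
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Placeholder URL; replace with your own input image
init_image = load_image("https://example.com/my-input-image.png")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# strength controls how strongly the input image is altered (0.0 keeps it, 1.0 ignores it)
image = pipeline(prompt, image=init_image, strength=0.75, num_inference_steps=25).images[0]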
Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". 
offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates an image-to-image PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object, and then finding the image-to-image pipeline linked to the pipeline class using pattern matching on the pipeline class +name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetImg2ImgPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image + +>>> pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates an image-to-image PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the +image-to-image pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default.
Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "runwayml/stable-diffusion-v1-5", requires_safety_checker=False +... ) + +>>> pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe_t2i) +>>> image = pipe_i2i(prompt, image).images[0] AutoPipelineForInpainting class diffusers.AutoPipelineForInpainting < source > ( *args **kwargs ) AutoPipelineForInpainting is a generic pipeline class that instantiates an inpainting pipeline class. The +specific underlying pipeline class is automatically selected from either the +from_pretrained() or from_pipe() methods. This class cannot be instantiated using __init__() (throws an error). Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. 
device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiates an inpainting PyTorch diffusion pipeline from pretrained pipeline weights. The from_pretrained() method takes care of returning the correct pipeline class instance by: detecting the pipeline class of the pretrained_model_or_path based on the _class_name property of its +config object, and then finding the inpainting pipeline linked to the pipeline class using pattern matching on the pipeline class name. If a controlnet argument is passed, it will instantiate a StableDiffusionControlNetInpaintPipeline +object. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login.
Examples: Copied >>> from diffusers import AutoPipelineForInpainting + +>>> pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> image = pipeline(prompt, image=init_image, mask_image=mask_image).images[0] from_pipe < source > ( pipeline **kwargs ) Parameters pipeline (DiffusionPipeline) — +an instantiated DiffusionPipeline object Instantiates an inpainting PyTorch diffusion pipeline from another instantiated diffusion pipeline class. The from_pipe() method takes care of returning the correct pipeline class instance by finding the inpainting +pipeline linked to the pipeline class using pattern matching on pipeline class name. All the modules the pipeline class contains will be used to initialize the new pipeline without reallocating +additional memory. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting + +>>> pipe_t2i = AutoPipelineForText2Image.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", requires_safety_checker=False +... ) + +>>> pipe_inpaint = AutoPipelineForInpainting.from_pipe(pipe_t2i) +>>> image = pipe_inpaint(prompt, image=init_image, mask_image=mask_image).images[0] diff --git a/scrapped_outputs/c6deb844dd324e90b09a9283df9a794a.txt b/scrapped_outputs/c6deb844dd324e90b09a9283df9a794a.txt new file mode 100644 index 0000000000000000000000000000000000000000..df6c7a63a692fbcb67cd30a67bb4f5f0a2dbf20d --- /dev/null +++ b/scrapped_outputs/c6deb844dd324e90b09a9283df9a794a.txt @@ -0,0 +1,156 @@ +IP-Adapter IP-Adapter is an image prompt adapter that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. Furthermore, this adapter can be reused with other models finetuned from the same base model and it can be combined with other adapters like ControlNet. The key idea behind IP-Adapter is the decoupled cross-attention mechanism which adds a separate cross-attention layer just for image features instead of using the same cross-attention layer for both text and image features. This allows the model to learn more image-specific features. Learn how to load an IP-Adapter in the Load adapters guide, and make sure you check out the IP-Adapter Plus section which requires manually loading the image encoder. This guide will walk you through using IP-Adapter for various tasks and use cases. General tasks Let’s take a look at how to use IP-Adapter’s image prompting capabilities with the StableDiffusionXLPipeline for tasks like text-to-image, image-to-image, and inpainting. We also encourage you to try out other pipelines such as Stable Diffusion, LCM-LoRA, ControlNet, T2I-Adapter, or AnimateDiff! In all the following examples, you’ll see the set_ip_adapter_scale() method. This method controls the amount of text or image conditioning to apply to the model. A value of 1.0 means the model is only conditioned on the image prompt. Lowering this value encourages the model to produce more diverse images, but they may not be as aligned with the image prompt. Typically, a value of 0.5 achieves a good balance between the two prompt types and produces good results. In the examples below, try adding low_cpu_mem_usage=True to the load_ip_adapter() method to speed up the loading time. Text-to-image Image-to-image Inpainting Video Crafting the precise text prompt to generate the image you want can be difficult because it may not always capture what you’d like to express.
Adding an image alongside the text prompt helps the model better understand what it should generate and can lead to more accurate results. Load a Stable Diffusion XL (SDXL) model and insert an IP-Adapter into the model with the load_ip_adapter() method. Use the subfolder parameter to load the SDXL model weights. Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") +pipeline.set_ip_adapter_scale(0.6) Create a text prompt and load an image prompt before passing them to the pipeline to generate an image. Copied image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png") +generator = torch.Generator(device="cpu").manual_seed(0) +images = pipeline( + prompt="a polar bear sitting in a chair drinking a milkshake", + ip_adapter_image=image, + negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images +images[0] IP-Adapter image generated image Configure parameters There are a couple of IP-Adapter parameters that are useful to know about and can help you with your image generation tasks. These parameters can make your workflow more efficient or give you more control over image generation. Image embeddings IP-Adapter enabled pipelines provide the ip_adapter_image_embeds parameter to accept precomputed image embeddings. This is particularly useful in scenarios where you need to run the IP-Adapter pipeline multiple times because you have more than one image. For example, multi IP-Adapter is a specific use case where you provide multiple styling images to generate a specific image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient. Instead, you can precompute and save the image embeddings to disk (which can save a lot of space if you’re using high-quality images) and load them when you need them. This parameter also gives you the flexibility to load embeddings from other sources. For example, ComfyUI image embeddings for IP-Adapters are compatible with Diffusers and should work out-of-the-box! Call the prepare_ip_adapter_image_embeds() method to encode and generate the image embeddings. Then you can save them to disk with torch.save. If you’re using IP-Adapter with ip_adapter_image_embeds instead of ip_adapter_image, you can set load_ip_adapter(image_encoder_folder=None,...) because you don’t need to load an encoder to generate the image embeddings. Copied image_embeds = pipeline.prepare_ip_adapter_image_embeds( + ip_adapter_image=image, + ip_adapter_image_embeds=None, + device="cuda", + num_images_per_prompt=1, + do_classifier_free_guidance=True, +) + +torch.save(image_embeds, "image_embeds.ipadpt") Now load the image embeddings by passing them to the ip_adapter_image_embeds parameter.
Copied image_embeds = torch.load("image_embeds.ipadpt") +images = pipeline( + prompt="a polar bear sitting in a chair drinking a milkshake", + ip_adapter_image_embeds=image_embeds, + negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images IP-Adapter masking Binary masks specify which portion of the output image should be assigned to an IP-Adapter. This is useful for composing more than one IP-Adapter image. For each input IP-Adapter image, you must provide a binary mask and an IP-Adapter. To start, preprocess the input IP-Adapter images with IPAdapterMaskProcessor.preprocess() to generate their masks. For optimal results, provide the output height and width to IPAdapterMaskProcessor.preprocess(). This ensures masks with different aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, you don’t have to set the height and width. Copied from diffusers.image_processor import IPAdapterMaskProcessor + +mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png") +mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png") + +output_height = 1024 +output_width = 1024 + +processor = IPAdapterMaskProcessor() +masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width) mask one mask two When there is more than one input IP-Adapter image, load them as a list to ensure each image is assigned to a different IP-Adapter. Each of the input IP-Adapter images here corresponds to the masks generated above. Copied face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png") +face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png") + +ip_images = [[face_image1], [face_image2]] IP-Adapter image one IP-Adapter image two Now pass the preprocessed masks to cross_attention_kwargs in the pipeline call. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"] * 2) +pipeline.set_ip_adapter_scale([0.7] * 2) +generator = torch.Generator(device="cpu").manual_seed(0) +num_images = 1 + +image = pipeline( + prompt="2 girls", + ip_adapter_image=ip_images, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=20, + num_images_per_prompt=num_images, + generator=generator, + cross_attention_kwargs={"ip_adapter_masks": masks} +).images[0] +image IP-Adapter masking applied no IP-Adapter masking applied Specific use cases IP-Adapter’s image prompting and compatibility with other adapters and models make it a versatile tool for a variety of use cases. This section covers some of the more popular applications of IP-Adapter, and we can’t wait to see what you come up with! Face model Generating accurate faces is challenging because they are complex and nuanced.
Diffusers supports two IP-Adapter checkpoints specifically trained to generate faces: ip-adapter-full-face_sd15.safetensors is conditioned with images of cropped faces and removed backgrounds ip-adapter-plus-face_sd15.safetensors uses patch embeddings and is conditioned with images of cropped faces IP-Adapter-FaceID is a face-specific IP-Adapter trained with face ID embeddings instead of CLIP image embeddings, allowing you to generate more consistent faces in different contexts and styles. Try out this popular community pipeline and see how it compares to the other face IP-Adapters. For face models, use the h94/IP-Adapter checkpoint. It is also recommended to use DDIMScheduler or EulerDiscreteScheduler for face models. Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.5) + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_einstein_base.png") +generator = torch.Generator(device="cpu").manual_seed(26) + +image = pipeline( + prompt="A photo of Einstein as a chef, wearing an apron, cooking in a French restaurant", + ip_adapter_image=image, + negative_prompt="lowres, bad anatomy, worst quality, low quality", + num_inference_steps=100, + generator=generator, +).images[0] +image IP-Adapter image generated image Multi IP-Adapter More than one IP-Adapter can be used at the same time to generate specific images in more diverse styles. For example, you can use IP-Adapter-Face to generate consistent faces and characters, and IP-Adapter Plus to generate those faces in a specific style. Read the IP-Adapter Plus section to learn why you need to manually load the image encoder. Load the image encoder with CLIPVisionModelWithProjection. Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) Next, you’ll load a base model, scheduler, and the IP-Adapters. The IP-Adapters to use are passed as a list to the weight_name parameter: ip-adapter-plus_sdxl_vit-h uses patch embeddings and a ViT-H image encoder ip-adapter-plus-face_sdxl_vit-h has the same architecture but it is conditioned with images of cropped faces Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() Load an image prompt and a folder containing images of a certain style you want to use. 
Copied face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] IP-Adapter image of face IP-Adapter style images Pass the image prompt and style images as a list to the ip_adapter_image parameter, and run the pipeline! Copied generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0] +image Instant generation Latent Consistency Models (LCM) are diffusion models that can generate images in as little as 4 steps compared to other diffusion models like SDXL that typically require way more steps. This is why image generation with an LCM feels “instantaneous”. IP-Adapters can be plugged into an LCM-LoRA model to instantly generate images with an image prompt. The IP-Adapter weights need to be loaded first, then you can use load_lora_weights() to load the LoRA style and weight you want to apply to your image. Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipeline.load_lora_weights(lcm_lora_id) +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() Try using a lower IP-Adapter scale to condition image generation more on the herge_style checkpoint, and remember to use the special token herge_style in your prompt to trigger and apply the style. Copied pipeline.set_ip_adapter_scale(0.4) + +prompt = "herge_style woman in armor, best quality, high quality" +generator = torch.Generator(device="cpu").manual_seed(0) + +ip_adapter_image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +image = pipeline( + prompt=prompt, + ip_adapter_image=ip_adapter_image, + num_inference_steps=4, + guidance_scale=1, +).images[0] +image Structural control To control image generation to an even greater degree, you can combine IP-Adapter with a model like ControlNet. A ControlNet is also an adapter that can be inserted into a diffusion model to allow for conditioning on an additional control image. The control image can be depth maps, edge maps, pose estimations, and more. Load a ControlNetModel checkpoint conditioned on depth maps, insert it into a diffusion model, and load the IP-Adapter.
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") Now load the IP-Adapter image and depth map. Copied ip_adapter_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") IP-Adapter image depth map Pass the depth map and IP-Adapter image to the pipeline to generate an image. Copied generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + image=depth_map, + ip_adapter_image=ip_adapter_image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images[0] +image diff --git a/scrapped_outputs/c6e54dcbfa7f34f9eae27127fcc6d4e4.txt b/scrapped_outputs/c6e54dcbfa7f34f9eae27127fcc6d4e4.txt new file mode 100644 index 0000000000000000000000000000000000000000..74ebf95ae4d144f747165d7c8784c89b6729768f --- /dev/null +++ b/scrapped_outputs/c6e54dcbfa7f34f9eae27127fcc6d4e4.txt @@ -0,0 +1,324 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. 
As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. 
It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. 
You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. 
All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, AnimateDiff. And you can use any custom models finetuned from the same base models. It also works with LCM-Lora out of the box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion Pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise you can also load a [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/v4.36.2/en/model_doc/clip#transformers.CLIPVisionModelWithProjection) model and pass it to a Stable Diffusion pipeline when you create it. + + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add “sunglasses”. 😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality, wearing sunglasses', + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] You can use the set_ip_adapter_scale() method to adjust the text prompt and image prompt condition ratio. If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. +scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See the examples below of how you can use it with Image-to-Image and Inpainting.
+ + + + Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=image, + ip_adapter_image=ip_image, + num_inference_steps=50, + generator=generator, + strength=0.6, +).images +images[0] + + + + Copied from diffusers import AutoPipelineForInpainting +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/inpaint_image.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/mask.png") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/girl.png") + +image = image.resize((512, 768)) +mask = mask.resize((512, 768)) + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=image, + mask_image=mask, + ip_adapter_image=ip_image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, + strength=0.5, +).images +images[0] + + +IP-Adapters can also be used with SDXL. Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face models.
Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +noise_scheduler = DDIMScheduler( + num_train_timesteps=1000, + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, + steps_offset=1 +) + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + scheduler=noise_scheduler, +).to("cuda") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a feature request if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet! You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff. 
+ + + + Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image + + + + Copied # animate diff + ip adapter +import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif, load_image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "Lykon/DreamShaper" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +# scheduler +scheduler = DDIMScheduler( + clip_sample=False, + beta_start=0.00085, + beta_end=0.012, + beta_schedule="linear", + timestep_spacing="trailing", + steps_offset=1 +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# load ip_adapter +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +# load motion adapters +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-pan-left", adapter_name="pan-left") + +seed = 42 +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = [image] * 3 +prompts = ["best quality, high quality"] * 3 +negative_prompt = "bad quality, worst quality" +adapter_weights = [[0.75, 0.0, 0.0], [0.0, 0.0, 0.75], [0.0, 0.75, 0.75]] + +# generate +output_frames = [] +for prompt, image, adapter_weight in zip(prompts, images, adapter_weights): + pipe.set_adapters(["zoom-out", "tilt-up", "pan-left"], adapter_weights=adapter_weight) + output = pipe( + prompt= prompt, + num_frames=16, + guidance_scale=7.5, + num_inference_steps=30, + ip_adapter_image = image, + generator=torch.Generator("cpu").manual_seed(seed), + ) + frames = output.frames[0] + output_frames.extend(frames) + +export_to_gif(output_frames, "test_out_animation.gif") + + diff --git a/scrapped_outputs/c6f4643647ddc03901e07ad7198e29d9.txt b/scrapped_outputs/c6f4643647ddc03901e07ad7198e29d9.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/c6f4643647ddc03901e07ad7198e29d9.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models 
(DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: 🧪 This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. 
trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/c6f960c0e388dacb21666b2bc6bde893.txt b/scrapped_outputs/c6f960c0e388dacb21666b2bc6bde893.txt new file mode 100644 index 0000000000000000000000000000000000000000..161bab95d89c856bbecb72654e8b0d0142d13c70 --- /dev/null +++ b/scrapped_outputs/c6f960c0e388dacb21666b2bc6bde893.txt @@ -0,0 +1,6 @@ +Unconditional image generation Unconditional image generation generates images that look like a random sample from the training data the model was trained on because the denoising process is not guided by any additional context like text or image. To get started, use the DiffusionPipeline to load the anton-l/ddpm-butterflies-128 checkpoint to generate images of butterflies. The DiffusionPipeline downloads and caches all the model components required to generate an image. Copied from diffusers import DiffusionPipeline + +generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = generator().images[0] +image Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images. The output image is a PIL.Image object that can be saved: Copied image.save("generated_image.png") You can also try experimenting with the num_inference_steps parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but it’ll take longer to generate. Feel free to play around with this parameter to see how it affects the image quality. 
Copied image = generator(num_inference_steps=100).images[0] +image Try out the Space below to generate an image of a butterfly! diff --git a/scrapped_outputs/c72748cd3604fa1b1c12b27eb1ec58dc.txt b/scrapped_outputs/c72748cd3604fa1b1c12b27eb1ec58dc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d2b3bcaac6e037386e0f81d812eeac9f05eb325a --- /dev/null +++ b/scrapped_outputs/c72748cd3604fa1b1c12b27eb1ec58dc.txt @@ -0,0 +1,547 @@ +Text-to-Image Generation with ControlNet Conditioning + + +Overview + +Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang and Maneesh Agrawala. +Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. +The abstract of the paper is the following: +We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications. +This model was contributed by the amazing community contributor takuma104 ❤️ . +Resources: +Paper +Original Code + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionControlNetPipeline +Text-to-Image Generation with ControlNet Conditioning +Colab Example + +Usage example + +In the following we give a simple example of how to use a ControlNet checkpoint with Diffusers for inference. +The inference pipeline is the same for all pipelines: +Take an image and run it through a pre-conditioning processor. +Run the pre-processed image through the StableDiffusionControlNetPipeline. +Let’s have a look at a simple example using the Canny Edge ControlNet. + + + Copied +from diffusers import StableDiffusionControlNetPipeline +from diffusers.utils import load_image + +# Let's load the popular vermeer image +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +Next, we process the image to get the canny image. This is step 1. - running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the official checkpoints for more information about other models. +First, we need to install opencv: + + + Copied +pip install opencv-contrib-python +Next, let’s also install all required Hugging Face libraries: + + + Copied +pip install diffusers transformers git+https://github.com/huggingface/accelerate.git +Then we can retrieve the canny edges of the image. 
+ + + Copied +import cv2 +from PIL import Image +import numpy as np + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +Let’s take a look at the processed image. + +Now, we load the official Stable Diffusion 1.5 Model as well as the ControlNet for canny edges. + + + Copied +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +) +To speed-up things and reduce memory, let’s enable model offloading and use the fast UniPCMultistepScheduler. + + + Copied +from diffusers import UniPCMultistepScheduler + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) + +# this command loads the individual model components on GPU on-demand. +pipe.enable_model_cpu_offload() +Finally, we can run the pipeline: + + + Copied +generator = torch.manual_seed(0) + +out_image = pipe( + "disco dancer with colorful lights", num_inference_steps=20, generator=generator, image=canny_image +).images[0] +This should take only around 3-4 seconds on GPU (depending on hardware). The output image then looks as follows: + +Note: To see how to run all other ControlNet checkpoints, please have a look at ControlNet with Stable Diffusion 1.5 + +Available checkpoints + +ControlNet requires a control image in addition to the text-to-image prompt. +Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to know more. +All checkpoints can be found under the authors’ namespace lllyasviel. + +ControlNet with Stable Diffusion 1.5 + +Model Name +Control Image Overview +Control Image Example +Generated Image Example +lllyasviel/sd-controlnet-canny Trained with canny edge detection +A monochrome image with white edges on a black background. + + +lllyasviel/sd-controlnet-depth Trained with Midas depth estimation +A grayscale image with black representing deep areas and white representing shallow areas. + + +lllyasviel/sd-controlnet-hed Trained with HED edge detection (soft edge) +A monochrome image with white soft edges on a black background. + + +lllyasviel/sd-controlnet-mlsd Trained with M-LSD line detection +A monochrome image composed only of white straight lines on a black background. + + +lllyasviel/sd-controlnet-normal Trained with normal map +A normal mapped image. + + +lllyasviel/sd-controlnet-openpose Trained with OpenPose bone image +A OpenPose bone image. + + +lllyasviel/sd-controlnet-scribble Trained with human scribbles +A hand-drawn monochrome image with white outlines on a black background. + + +lllyasviel/sd-controlnet-segTrained with semantic segmentation +An ADE20K’s segmentation protocol image. 
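Any of these checkpoints can be dropped into the same pipeline shown above, as long as the control image matches the conditioning the checkpoint was trained on. As a rough sketch (not taken from the original guide), here is how the depth checkpoint could be wired up, assuming you already have a grayscale depth map as a PIL image; the depth-map URL below is a hypothetical placeholder you would replace with your own image: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
+from diffusers.utils import load_image
+import torch
+
+# load the depth-conditioned ControlNet instead of the canny one
+controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
+pipe = StableDiffusionControlNetPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
+)
+pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
+pipe.enable_model_cpu_offload()
+
+# a precomputed depth map (hypothetical URL, replace with your own control image)
+depth_image = load_image("https://example.com/depth_map.png")
+
+generator = torch.manual_seed(0)
+out_image = pipe(
+    "a futuristic living room", num_inference_steps=20, generator=generator, image=depth_image
+).images[0]
The only thing that changes between checkpoints is the conditioning image passed as image; the rest of the pipeline setup stays the same.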
+ + + +class diffusers.StableDiffusionControlNetPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +controlnet: ControlNetModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +controlnet (ControlNetModel) — +Provides additional conditioning to the unet during the denoising process + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[torch.FloatTensor], typing.List[PIL.Image.Image]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +controlnet_conditioning_scale: float = 1.0 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image]) — +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image` can +also be accepted as an image. The control image is automatically resized to fit the output image. 
+ + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler and is ignored for other schedulers. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +controlnet_conditioning_scale (float, optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet.
+ + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. 
+ +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. diff --git a/scrapped_outputs/c73417c729385e6c86119327b4ed50e9.txt b/scrapped_outputs/c73417c729385e6c86119327b4ed50e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..84ee568798480dea840f7e1bb501e3ca6528fcc9 --- /dev/null +++ b/scrapped_outputs/c73417c729385e6c86119327b4ed50e9.txt @@ -0,0 +1,153 @@ +RePaint + + +Overview + +RePaint: Inpainting using Denoising Diffusion Probabilistic Models (PNDM) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool. +The abstract of the paper is the following: +Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. 
We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. +RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_repaint.py +Image Inpainting +- + +Usage example + + + + Copied +from io import BytesIO + +import torch + +import PIL +import requests +from diffusers import RePaintPipeline, RePaintScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +# Load the original image and the mask as PIL images +original_image = download_image(img_url).resize((256, 256)) +mask_image = download_image(mask_url).resize((256, 256)) + +# Load the RePaint scheduler and pipeline based on a pretrained DDPM model +scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256") +pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler) +pipe = pipe.to("cuda") + +generator = torch.Generator(device="cuda").manual_seed(0) +output = pipe( + original_image=original_image, + mask_image=mask_image, + num_inference_steps=250, + eta=0.0, + jump_length=10, + jump_n_sample=10, + generator=generator, +) +inpainted_image = output.images[0] + +RePaintPipeline + + +class diffusers.RePaintPipeline + +< +source +> +( +unet +scheduler + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] +mask_image: typing.Union[torch.Tensor, PIL.Image.Image] +num_inference_steps: int = 250 +eta: float = 0.0 +jump_length: int = 10 +jump_n_sample: int = 10 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +The original image to inpaint on. + + +mask_image (torch.FloatTensor or PIL.Image.Image) — +The mask_image where 0.0 values define which part of the original image to inpaint (change). + + +num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float) — +The weight of noise for added noise in a diffusion step. Its value is between 0.0 and 1.0 - 0.0 is DDIM +and 1.0 is DDPM scheduler respectively. + + +jump_length (int, optional, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +jump_n_sample (int, optional, defaults to 10) — +The number of times we will make forward time jump for a given chosen time sample. 
Take a look at +Figure 9 and 10 in https://arxiv.org/pdf/2201.09865.pdf. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/c74cf00b7ddfacb85b55b1669b0360bd.txt b/scrapped_outputs/c74cf00b7ddfacb85b55b1669b0360bd.txt new file mode 100644 index 0000000000000000000000000000000000000000..27ff3e96e4e6d4dd3d19eb137ba8d07b5db24119 --- /dev/null +++ b/scrapped_outputs/c74cf00b7ddfacb85b55b1669b0360bd.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None fps: int = 10 ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. diff --git a/scrapped_outputs/c76d316cb7a7a1b803442d064f8be13d.txt b/scrapped_outputs/c76d316cb7a7a1b803442d064f8be13d.txt new file mode 100644 index 0000000000000000000000000000000000000000..1682c999f107e1c6ee2accbd0fb9ce7568ca96d8 --- /dev/null +++ b/scrapped_outputs/c76d316cb7a7a1b803442d064f8be13d.txt @@ -0,0 +1,200 @@ +DreamBooth DreamBooth is a training technique that updates the entire diffusion model by training on just a few images of a subject or style. It works by associating a special word in the prompt with the example images. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_dreambooth.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
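A quick, optional sanity check (not part of the official steps) is to confirm that Python now picks up the source checkout: Copied import diffusers
+
+# an install from source typically reports a version with a ".dev0" suffix
+print(diffusers.__version__)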
Navigate to the example folder with the training script and install the required dependencies for the script you’re using: + + + + Copied cd examples/dreambooth +pip install -r requirements.txt + + + + Copied cd examples/dreambooth +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters DreamBooth is very sensitive to training hyperparameters, and it is easy to overfit. Read the Training Stable Diffusion with Dreambooth using 🧨 Diffusers blog post for recommended settings for different subjects to help you choose the appropriate hyperparameters. The training script offers many parameters for customizing your training run. All of the parameters and their descriptions are found in the parse_args() function. The parameters are set with default values that should work pretty well out-of-the-box, but you can also set your own values in the training command if you’d like. For example, to train in the bf16 format: Copied accelerate launch train_dreambooth.py \ + --mixed_precision="bf16" Some basic and important parameters to know and specify are: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --instance_data_dir: path to a folder containing the training dataset (example images) --instance_prompt: the text prompt that contains the special word for the example images --train_text_encoder: whether to also train the text encoder --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_dreambooth.py \ + --snr_gamma=5.0 Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. 
Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --class_data_dir: path to a folder containing the generated class sample images --class_prompt: the text prompt describing the class of the generated sample images Copied accelerate launch train_dreambooth.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="path/to/class/images" \ + --class_prompt="text prompt describing class" Train text encoder To improve the quality of the generated outputs, you can also train the text encoder in addition to the UNet. This requires additional memory and you’ll need a GPU with at least 24GB of vRAM. If you have the necessary hardware, then training the text encoder produces better results, especially when generating images of faces. Enable this option by: Copied accelerate launch train_dreambooth.py \ + --train_text_encoder Training script DreamBooth comes with its own dataset classes: DreamBoothDataset: preprocesses the images and class images, and tokenizes the prompts for training PromptDataset: generates the prompt embeddings to generate the class images If you enabled prior preservation loss, the class images are generated here: Copied sample_dataset = PromptDataset(args.class_prompt, num_new_images) +sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) + +sample_dataloader = accelerator.prepare(sample_dataloader) +pipeline.to(accelerator.device) + +for example in tqdm( + sample_dataloader, desc="Generating class images", disable=not accelerator.is_local_main_process +): + images = pipeline(example["prompt"]).images Next is the main() function which handles setting up the dataset for training and the training loop itself. 
The script loads the tokenizer, scheduler and models: Copied # Load the tokenizer +if args.tokenizer_name: + tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_name, revision=args.revision, use_fast=False) +elif args.pretrained_model_name_or_path: + tokenizer = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, + subfolder="tokenizer", + revision=args.revision, + use_fast=False, + ) + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = text_encoder_cls.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) + +if model_has_vae(args): + vae = AutoencoderKL.from_pretrained( + args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision + ) +else: + vae = None + +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) Then, it’s time to create the training dataset and DataLoader from DreamBoothDataset: Copied train_dataset = DreamBoothDataset( + instance_data_root=args.instance_data_dir, + instance_prompt=args.instance_prompt, + class_data_root=args.class_data_dir if args.with_prior_preservation else None, + class_prompt=args.class_prompt, + class_num=args.num_class_images, + tokenizer=tokenizer, + size=args.resolution, + center_crop=args.center_crop, + encoder_hidden_states=pre_computed_encoder_hidden_states, + class_prompt_encoder_hidden_states=pre_computed_class_prompt_encoder_hidden_states, + tokenizer_max_length=args.tokenizer_max_length, +) + +train_dataloader = torch.utils.data.DataLoader( + train_dataset, + batch_size=args.train_batch_size, + shuffle=True, + collate_fn=lambda examples: collate_fn(examples, args.with_prior_preservation), + num_workers=args.dataloader_num_workers, +) Lastly, the training loop takes care of the remaining steps such as converting images to latent space, adding noise to the input, predicting the noise residual, and calculating the loss. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script You’re now ready to launch the training script! 🚀 For this guide, you’ll download some images of a dog and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the dog images to, and OUTPUT_DIR to where you want to save the model. You’ll use sks as the special word to tie the training to. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="a photo of a sks dog" +--num_validation_images=4 +--validation_steps=100 One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train DreamBooth. 
+ + +On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to help you train a DreamBooth model. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_dreambooth.py \ + --gradient_checkpointing \ + --use_8bit_adam \ + + +On a 12GB GPU, you’ll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to None instead of zero to reduce your memory-usage. Copied accelerate launch train_dreambooth.py \ + --use_8bit_adam \ + --gradient_checkpointing \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + + +On a 8GB GPU, you’ll need DeepSpeed to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: Copied accelerate config During configuration, confirm that you want to use DeepSpeed. Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the DeepSpeed documentation for more configuration options. You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam deepspeed.ops.adam.DeepSpeedCPUAdam for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That’s it! You don’t need to add any additional parameters to your training command. + + + + + + Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 \ + --push_to_hub + + + + Copied export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path-to-save-model" + +python train_dreambooth_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --learning_rate=5e-6 \ + --max_train_steps=400 \ + --push_to_hub + + +Once training is complete, you can use your newly trained model for inference! Can’t wait to try your model for inference before training is complete? 🤭 Make sure you have the latest version of 🤗 Accelerate installed. 
Copied from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +unet = UNet2DConditionModel.from_pretrained("path/to/model/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("path/to/model/checkpoint-100/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", unet=unet, text_encoder=text_encoder, dtype=torch.float16, +).to("cuda") + +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") + + + + Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path_to_saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +image = pipeline("A photo of sks dog in a bucket", num_inference_steps=50, guidance_scale=7.5).images[0] +image.save("dog-bucket.png") + + + + Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from diffusers import FlaxStableDiffusionPipeline + +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path-to-your-trained-model", dtype=jax.numpy.bfloat16) + +prompt = "A photo of sks dog in a bucket" +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 50 + +num_samples = jax.device_count() +prompt = num_samples * [prompt] +prompt_ids = pipeline.prepare_inputs(prompt) + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) +image.save("dog-bucket.png") + + + LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_dreambooth_lora.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_dreambooth_lora_sdxl.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful: Learn how to load a DreamBooth model for inference if you trained your model with LoRA. diff --git a/scrapped_outputs/c7909e5b058504c4f1d894711a7f5b60.txt b/scrapped_outputs/c7909e5b058504c4f1d894711a7f5b60.txt new file mode 100644 index 0000000000000000000000000000000000000000..f6343d343964a165861c1cd9fd3b9fbe84354c01 --- /dev/null +++ b/scrapped_outputs/c7909e5b058504c4f1d894711a7f5b60.txt @@ -0,0 +1,100 @@ +Parallel Sampling of Diffusion Models Parallel Sampling of Diffusion Models is by Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari. The abstract from the paper is: Diffusion models are powerful generative models but suffer from slow sampling, often taking 1000 sequential denoising steps for one sample. 
As a result, considerable efforts have been directed toward reducing the number of denoising steps, but these methods hurt sample quality. Instead of reducing the number of denoising steps (trading quality for speed), in this paper we explore an orthogonal approach: can we run the denoising steps in parallel (trading compute for speed)? In spite of the sequential nature of the denoising steps, we show that surprisingly it is possible to parallelize sampling via Picard iterations, by guessing the solution of future denoising steps and iteratively refining until convergence. With this insight, we present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel. ParaDiGMS is the first diffusion sampling method that enables trading compute for speed and is even compatible with existing fast sampling techniques such as DDIM and DPMSolver. Using ParaDiGMS, we improve sampling speed by 2-4x across a range of robotics and image generation models, giving state-of-the-art sampling speeds of 0.2s on 100-step DiffusionPolicy and 14.6s on 1000-step StableDiffusion-v2 with no measurable degradation of task reward, FID score, or CLIP score. The original codebase can be found at AndyShih12/paradigms, and the pipeline was contributed by AndyShih12. ❤️ Tips This pipeline improves sampling speed by running denoising steps in parallel, at the cost of increased total FLOPs. +Therefore, it is better to call this pipeline when running on multiple GPUs. Otherwise, without enough GPU bandwidth +sampling may be even slower than sequential sampling. The two parameters to play with are parallel (batch size) and tolerance. If it fits in memory, for a 1000-step DDPM you can aim for a batch size of around 100 (for example, 8 GPUs and batch_per_device=12 to get parallel=96). A higher batch size may not fit in memory, and lower batch size gives less parallelism. For tolerance, using a higher tolerance may get better speedups but can risk sample quality degradation. If there is quality degradation with the default tolerance, then use a lower tolerance like 0.001. For a 1000-step DDPM on 8 A100 GPUs, you can expect around a 3x speedup from StableDiffusionParadigmsPipeline compared to the StableDiffusionPipeline +by setting parallel=80 and tolerance=0.1. 🤗 Diffusers offers distributed inference support for generating multiple prompts +in parallel on multiple GPUs. But StableDiffusionParadigmsPipeline is designed for speeding up sampling of a single prompt by using multiple GPUs. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionParadigmsPipeline class diffusers.StableDiffusionParadigmsPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using a parallelized version of Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 parallel: int = 10 tolerance: float = 0.1 guidance_scale: float = 7.5 negative_prompt: typing.Union[typing.List[str], str, NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None debug: bool = False clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. parallel (int, optional, defaults to 10) — +The batch size to use when doing parallel sampling. More parallelism may lead to faster inference but +requires higher memory usage and can also require more total FLOPs. tolerance (float, optional, defaults to 0.1) — +The error tolerance for determining when to slide the batch window forward for parallel sampling. Lower +tolerance usually leads to less or no degradation. Higher tolerance is faster but can risk degradation +of sample quality. The tolerance is specified as a ratio of the scheduler’s noise magnitude. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. debug (bool, optional, defaults to False) — +Whether or not to run in debug mode. In debug mode, torch.cumsum is evaluated using the CPU. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DDPMParallelScheduler +>>> from diffusers import StableDiffusionParadigmsPipeline + +>>> scheduler = DDPMParallelScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler") + +>>> pipe = StableDiffusionParadigmsPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", scheduler=scheduler, torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> ngpu, batch_per_device = torch.cuda.device_count(), 5 +>>> pipe.wrapped_unet = torch.nn.DataParallel(pipe.unet, device_ids=[d for d in range(ngpu)]) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, parallel=ngpu * batch_per_device, num_inference_steps=1000).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. 
Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/c7cb60a94e2de5c9f060798badad5723.txt b/scrapped_outputs/c7cb60a94e2de5c9f060798badad5723.txt new file mode 100644 index 0000000000000000000000000000000000000000..987c9209fcde600484b42a955615d555013bf385 --- /dev/null +++ b/scrapped_outputs/c7cb60a94e2de5c9f060798badad5723.txt @@ -0,0 +1,367 @@ +Image-to-image The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images. The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon. The abstract from the paper is: Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImg2ImgPipeline class diffusers.StableDiffusionImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. 
feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. 
If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
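The load_lora_weights() method documented above can be used to attach LoRA adapters to this pipeline before running image-to-image generation. The sketch below is an illustration rather than part of the official documentation: the repository id is a placeholder, and the call-time scaling via cross_attention_kwargs is assumed to behave as it does for the text-to-image pipeline.

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "username/some-sd15-lora" is a placeholder repository id, not a real checkpoint.
pipe.load_lora_weights("username/some-sd15-lora", adapter_name="style")

# The LoRA contribution can typically be scaled at call time, for example:
# pipe(prompt, image=init_image, cross_attention_kwargs={"scale": 0.8}).images[0]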
enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
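As a brief illustration of the tuning methods documented above, the sketch below enables FreeU and fused QKV projections on the image-to-image pipeline. It is not taken from the official documentation, and the FreeU scaling factors are commonly suggested values for Stable Diffusion v1.x rather than values prescribed here; consult the official FreeU repository for combinations tuned to your base model.

import torch
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# FreeU: amplify backbone features (b1, b2) and attenuate skip features (s1, s2).
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

# Experimental: fuse the query/key/value projection matrices in the attention modules.
pipe.fuse_qkv_projections()

# ... run the pipeline as usual, then revert if desired:
# pipe.disable_freeu()
# pipe.unfuse_qkv_projections()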
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionImg2ImgPipeline class diffusers.FlaxStableDiffusionImg2ImgPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide image generation. image (jnp.ndarray) — +Array representing an image batch to be used as the starting point. 
params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array or jax.Array) — +Array containing random number generator key. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. noise (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. The array is generated by +sampling using the supplied random generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... 
prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/c7d33cdf4c41a45452fc79027dbfa890.txt b/scrapped_outputs/c7d33cdf4c41a45452fc79027dbfa890.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/c7d33cdf4c41a45452fc79027dbfa890.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. 
PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. 
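For orientation, the quantity reported by the torchmetrics clip_score metric used below is, per its documentation, the cosine similarity between the CLIP image embedding $E_I$ and text embedding $E_C$, scaled by 100 and clipped at zero (and averaged over the image-caption pairs):

$$\text{CLIPScore}(I, C) = \max\bigl(100 \cdot \cos(E_I, E_C),\, 0\bigr)$$

Scores therefore lie in the range [0, 100], with higher values indicating better image-caption compatibility.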
Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +device = "cuda" +weight_dtype = torch.float16 + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=weight_dtype).to(device) Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline, we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be much higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated with an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline as an example. It takes an edit instruction as an input prompt and an input image to be edited.
Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
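In symbols, writing E_img and E_txt for the (L2-normalized) CLIP image and text embeddings, the quantity computed below is the cosine similarity between the change in image space and the change in caption space:

sim_direction = cosine_similarity( E_img(edited image) - E_img(original image), E_txt(modified caption) - E_txt(original caption) )

This is exactly what the DirectionalSimilarity module implemented in the next step returns.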
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
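For reference, if the Inception features of the real and generated images are summarized by Gaussians $\mathcal{N}(\mu_r, \Sigma_r)$ and $\mathcal{N}(\mu_g, \Sigma_g)$, the Fréchet distance reported as FID is:

$$\text{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\left(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\right)$$

This is the quantity that the torchmetrics implementation used below computes.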
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
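As a side note, KID can be computed with torchmetrics in much the same way as FID. Here is a minimal sketch reusing the real_images and fake_images tensors from above; with only 10 samples, subset_size has to be reduced from its default:

from torchmetrics.image.kid import KernelInceptionDistance

# normalize=True because the tensors are floats in [0, 1], mirroring the FID call above.
kid = KernelInceptionDistance(subset_size=5, normalize=True)
kid.update(real_images, real=True)
kid.update(fake_images, real=False)

kid_mean, kid_std = kid.compute()
print(f"KID: {float(kid_mean):.4f} ± {float(kid_std):.4f}")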
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/c7e8d7fdcc030756e18184528ce1f8fb.txt b/scrapped_outputs/c7e8d7fdcc030756e18184528ce1f8fb.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/c7e8d7fdcc030756e18184528ce1f8fb.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/c8516788be0f3f73d0205360f0e0bc03.txt b/scrapped_outputs/c8516788be0f3f73d0205360f0e0bc03.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ed526b258968db676928aa0d8cb1ec1badf1fc8 --- /dev/null +++ b/scrapped_outputs/c8516788be0f3f73d0205360f0e0bc03.txt @@ -0,0 +1,129 @@ +ControlNet-XS ControlNet-XS was introduced in ControlNet-XS by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the original ControlNet can be made much smaller and still produce good results. Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster (see benchmark with StableDiffusion-XL) and uses ~45% less memory. Here’s the overview from the project page: With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. 
In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS. We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license. This model was contributed by UmerHA. ❤️ Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetXSPipeline class diffusers.StableDiffusionControlNetXSPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetXSModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetXSModel) — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet-XS guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 control_guidance_start: float = 0.0 control_guidance_end: float = 1.0 clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetXSPipeline, ControlNetXSModel +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 +>>> controlnet = ControlNetXSModel.from_pretrained( +... "UmerHA/ConrolNetXS-SD2.1-canny", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetXSPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-1", controlnet=controlnet, torch_dtype=torch.float16 +... 
) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/c864616dfac7b641b011f18425e71ecd.txt b/scrapped_outputs/c864616dfac7b641b011f18425e71ecd.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae04cd19402c4ab82a0fccbebd2b76ec8a611a13 --- /dev/null +++ b/scrapped_outputs/c864616dfac7b641b011f18425e71ecd.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
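As a quick orientation before the class reference, here is a minimal, assumed sketch that instantiates the model with its default configuration and inspects its size; in practice the model is usually built and loaded by a video pipeline (for example AnimateDiff) rather than constructed by hand:

from diffusers import UNetMotionModel

# Build the motion UNet with its default configuration (see the signature below)
# and count its parameters. This is only an illustration; no weights are loaded.
unet = UNetMotionModel()
num_params = sum(p.numel() for p in unet.parameters())
print(f"UNetMotionModel parameters: {num_params / 1e6:.1f}M")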
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet3DConditionOutput instead of a plain +tuple. Returns +UNet3DConditionOutput or tuple + +If return_dict is True, an UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/c8647e3589c553c148673d0e4e7e822e.txt b/scrapped_outputs/c8647e3589c553c148673d0e4e7e822e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/c8647e3589c553c148673d0e4e7e822e.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. 
However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. 
act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/c86da04dde4455915fa31be0c0aa4a67.txt b/scrapped_outputs/c86da04dde4455915fa31be0c0aa4a67.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c87157611fd1c4bccecb63644a6c5189.txt b/scrapped_outputs/c87157611fd1c4bccecb63644a6c5189.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/c87157611fd1c4bccecb63644a6c5189.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. 
The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. 
The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/c87e6c6ebe69cfb6d47925eb03869976.txt b/scrapped_outputs/c87e6c6ebe69cfb6d47925eb03869976.txt new file mode 100644 index 0000000000000000000000000000000000000000..d9d30a7d367c357d2e506841038933d2d5cecb7f --- /dev/null +++ b/scrapped_outputs/c87e6c6ebe69cfb6d47925eb03869976.txt @@ -0,0 +1,43 @@ +LMSDiscreteScheduler LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at crowsonkb/k-diffusion. LMSDiscreteScheduler class diffusers.LMSDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. 
Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. A linear multistep scheduler for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_lms_coefficient < source > ( order t current_order ) Parameters order () — t () — current_order () — Compute the linear multistep coefficient. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. order (int, defaults to 4) — +The order of the linear multistep method. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). LMSDiscreteSchedulerOutput class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
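To make the API above concrete, here is a minimal sketch (not part of the original reference) showing how LMSDiscreteScheduler is typically swapped into a Stable Diffusion pipeline with from_config(); the checkpoint id, prompt, and step count are placeholder choices. Copied
import torch
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

# Load a Stable Diffusion checkpoint (placeholder model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config so beta_start, beta_end, and beta_schedule match the checkpoint.
pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

# Optionally enable the Karras sigma schedule described above:
# pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=30).images[0]
image.save("astronaut_lms.png")
The set_timesteps(), scale_model_input(), and step() methods documented above are called internally by the pipeline during this denoising loop.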
diff --git a/scrapped_outputs/c8880d19c5cfd61409adf244a9bb21ad.txt b/scrapped_outputs/c8880d19c5cfd61409adf244a9bb21ad.txt new file mode 100644 index 0000000000000000000000000000000000000000..48396c146f3995890b4116a7443457db9ccef879 --- /dev/null +++ b/scrapped_outputs/c8880d19c5cfd61409adf244a9bb21ad.txt @@ -0,0 +1,60 @@ +VAE Image Processor The VaeImageProcessor provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and post-processing outputs once they’re decoded. This includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays. All pipelines with VaeImageProcessor accept PIL Image, PyTorch tensor, or NumPy arrays as image inputs and return outputs based on the output_type argument specified by the user. You can pass encoded image latents directly to the pipeline and return latents from the pipeline as a specific output with the output_type argument (for example output_type="latent"). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without leaving the latent space. It also makes it much easier to use multiple pipelines together by passing PyTorch tensors directly between different pipelines. VaeImageProcessor class diffusers.image_processor.VaeImageProcessor < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. Can accept +height and width arguments from the image_processor.VaeImageProcessor.preprocess() method. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. do_binarize (bool, optional, defaults to False) — +Whether to binarize the image to 0/1. do_convert_rgb (bool, optional, defaults to False) — +Whether to convert the images to RGB format. do_convert_grayscale (bool, optional, defaults to False) — +Whether to convert the images to grayscale format. Image processor for VAE. apply_overlay < source > ( mask: Image init_image: Image image: Image crop_coords: Optional = None ) Overlay the inpaint output on the original image. binarize < source > ( image: Image ) → PIL.Image.Image Parameters image (PIL.Image.Image) — +The image input, should be a PIL image. Returns +PIL.Image.Image + +The binarized image. Values less than 0.5 are set to 0, values greater than 0.5 are set to 1. + Create a mask. blur < source > ( image: Image blur_factor: int = 4 ) Applies Gaussian blur to an image. convert_to_grayscale < source > ( image: Image ) Converts a PIL image to grayscale format. convert_to_rgb < source > ( image: Image ) Converts a PIL image to RGB format. denormalize < source > ( images: Union ) Denormalize an image array to [0,1]. get_crop_region < source > ( mask_image: Image width: int height: int pad = 0 ) → tuple Parameters mask_image (PIL.Image.Image) — Mask image. width (int) — Width of the image to be processed. height (int) — Height of the image to be processed. pad (int, optional) — Padding to be added to the crop region. Defaults to 0.
Returns +tuple + +(x1, y1, x2, y2) representing a rectangular region that contains all masked areas in an image and matches the original aspect ratio. + Finds a rectangular region that contains all masked areas in an image, and expands the region to match the aspect ratio of the original image; +for example, if the user drew a mask in a 128x32 region, and the dimensions for processing are 512x512, the region will be expanded to 128x128. get_default_height_width < source > ( image: Union height: Optional = None width: Optional = None ) Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. If it is a numpy array, it should have +shape [batch, height, width] or [batch, height, width, channel]; if it is a pytorch tensor, it should +have shape [batch, channel, height, width]. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use the height of the image input. width (int, optional, defaults to None) — +The width of the preprocessed image. If None, will use the width of the image input. This function returns the height and width that are downscaled to the next integer multiple of +vae_scale_factor. normalize < source > ( images: Union ) Normalize an image array to [-1,1]. numpy_to_pil < source > ( images: ndarray ) Convert a numpy image or a batch of images to a PIL image. numpy_to_pt < source > ( images: ndarray ) Convert a NumPy image to a PyTorch tensor. pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. postprocess < source > ( image: FloatTensor output_type: str = 'pil' do_denormalize: Optional = None ) → PIL.Image.Image, np.ndarray or torch.FloatTensor Parameters image (torch.FloatTensor) — +The image input, should be a pytorch tensor with shape B x C x H x W. output_type (str, optional, defaults to pil) — +The output type of the image, can be one of pil, np, pt, latent. do_denormalize (List[bool], optional, defaults to None) — +Whether to denormalize the image to [0,1]. If None, will use the value of do_normalize in the +VaeImageProcessor config. Returns +PIL.Image.Image, np.ndarray or torch.FloatTensor + +The postprocessed image. + Postprocess the image output from tensor to output_type. preprocess < source > ( image: Union height: Optional = None width: Optional = None resize_mode: str = 'default' crops_coords: Optional = None ) Parameters image (pipeline_image_input) — +The image input; accepted formats are PIL images, NumPy arrays, and PyTorch tensors. Also accepts a list of supported formats. height (int, optional, defaults to None) — +The height of the preprocessed image. If None, will use get_default_height_width() to get the default height. width (int, optional, defaults to None) — +The width of the preprocessed image. If None, will use get_default_height_width() to get the default width. resize_mode (str, optional, defaults to default) — +The resize mode, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input.
crops_coords (List[Tuple[int, int, int, int]], optional, defaults to None) — +The crop coordinates for each image in the batch. If None, will not crop the image. Preprocess the image input. pt_to_numpy < source > ( images: FloatTensor ) Convert a PyTorch tensor to a NumPy image. resize < source > ( image: Union height: int width: int resize_mode: str = 'default' ) → PIL.Image.Image, np.ndarray or torch.Tensor Parameters image (PIL.Image.Image, np.ndarray or torch.Tensor) — +The image input, can be a PIL image, numpy array or pytorch tensor. height (int) — +The height to resize to. width (int) — +The width to resize to. resize_mode (str, optional, defaults to default) — +The resize mode to use, can be one of default or fill. If default, will resize the image to fit +within the specified width and height, and it may not maintain the original aspect ratio. +If fill, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, filling the empty space with data from the image. +If crop, will resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image +within the dimensions, cropping the excess. +Note that resize_mode fill and crop are only supported for PIL image input. Returns +PIL.Image.Image, np.ndarray or torch.Tensor + +The resized image. + Resize image. VaeImageProcessorLDM3D The VaeImageProcessorLDM3D accepts RGB and depth inputs and returns RGB and depth outputs. class diffusers.image_processor.VaeImageProcessorLDM3D < source > ( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True ) Parameters do_resize (bool, optional, defaults to True) — +Whether to downscale the image’s (height, width) dimensions to multiples of vae_scale_factor. vae_scale_factor (int, optional, defaults to 8) — +VAE scale factor. If do_resize is True, the image is automatically resized to multiples of this factor. resample (str, optional, defaults to lanczos) — +Resampling filter to use when resizing the image. do_normalize (bool, optional, defaults to True) — +Whether to normalize the image to [-1,1]. Image processor for VAE LDM3D. depth_pil_to_numpy < source > ( images: Union ) Convert a PIL image or a list of PIL images to NumPy arrays. numpy_to_depth < source > ( images: ndarray ) Convert a NumPy depth image or a batch of images to a PIL image. numpy_to_pil < source > ( images: ndarray ) Convert a NumPy image or a batch of images to a PIL image. preprocess < source > ( rgb: Union depth: Union height: Optional = None width: Optional = None target_res: Optional = None ) Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors. rgblike_to_depthmap < source > ( image: Union ) Returns: depth map. A minimal usage sketch for VaeImageProcessor is shown below. diff --git a/scrapped_outputs/c8c024771cccaa76c263f70a134541d4.txt b/scrapped_outputs/c8c024771cccaa76c263f70a134541d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..619b44cd8c05a0c372dc935e8e8f3871d9c7d942 --- /dev/null +++ b/scrapped_outputs/c8c024771cccaa76c263f70a134541d4.txt @@ -0,0 +1,3 @@ +TODO + +Coming soon!
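To make the preprocess()/postprocess() round trip of VaeImageProcessor concrete, here is a minimal sketch (not part of the original reference); the image URL is simply reused from the DiffEdit example earlier in this document, and the sizes are placeholder choices. Copied
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import load_image

image_processor = VaeImageProcessor(vae_scale_factor=8)

# Any RGB image works here.
raw_image = load_image("https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png")

# preprocess() resizes to multiples of vae_scale_factor and normalizes to [-1, 1],
# returning a PyTorch tensor of shape (batch, channels, height, width).
pixel_values = image_processor.preprocess(raw_image, height=512, width=512)

# postprocess() denormalizes the tensor and converts it to the requested output type.
pil_images = image_processor.postprocess(pixel_values, output_type="pil")
pil_images[0].save("roundtrip.png")
In a pipeline, the tensor returned by preprocess() would be encoded by the VAE, and the decoded output would be passed through postprocess() instead.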
diff --git a/scrapped_outputs/c90cb184d9f5e9312b1f1f206d279c8f.txt b/scrapped_outputs/c90cb184d9f5e9312b1f1f206d279c8f.txt new file mode 100644 index 0000000000000000000000000000000000000000..010804c603b16ee701d32fbfb26b35fc124e9936 --- /dev/null +++ b/scrapped_outputs/c90cb184d9f5e9312b1f1f206d279c8f.txt @@ -0,0 +1,338 @@ +Safe Stable Diffusion + +Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates the well-known issue that models like Stable Diffusion, which are trained on unfiltered, web-crawled datasets, tend to suffer from inappropriate degeneration. For instance, Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, or otherwise offensive content. +Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces such content. +The abstract of the paper is the following: +Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. +Overview: the pipeline_stable_diffusion_safe.py pipeline supports the Text-to-Image Generation task (no Colab notebook or demo is available). + +Tips + +Safe Stable Diffusion may also be used with the weights of Stable Diffusion. + +Run Safe Stable Diffusion + +Safe Stable Diffusion can be tested very easily with the StableDiffusionPipelineSafe and the "AIML-TUDA/stable-diffusion-safe" checkpoint, in exactly the same way as shown in the Conditional Image Generation Guide. + +Interacting with the Safety Concept + +To check and edit the currently used safety concept, use the safety_concept property of StableDiffusionPipelineSafe: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. + +Using pre-defined safety configurations + +You may use the 4 configurations defined in the Safe Latent Diffusion paper as follows: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c.
leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) +The following configurations are available: SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONg, and SafetyConfig.MAX. + +How to load and use different schedulers. + +The safe stable diffusion pipeline uses PNDMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the stable diffusion pipeline such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("AIML-TUDA/stable-diffusion-safe", subfolder="scheduler") +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained( +... "AIML-TUDA/stable-diffusion-safe", scheduler=euler_scheduler +... ) + +StableDiffusionSafePipelineOutput + + +class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] +unsafe_images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType] +applied_safety_concept: typing.Optional[str] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. + + +applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled + + + +Output class for Safe Stable Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +StableDiffusionPipelineSafe + + +class diffusers.StableDiffusionPipelineSafe + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: SafeStableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. 
+ + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Safe Latent Diffusion. +The implementation is based on the StableDiffusionPipeline +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +sld_guidance_scale: typing.Optional[float] = 1000 +sld_warmup_steps: typing.Optional[int] = 10 +sld_threshold: typing.Optional[float] = 0.01 +sld_momentum_scale: typing.Optional[float] = 0.3 +sld_mom_beta: typing.Optional[float] = 0.4 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +sld_guidance_scale (float, optional, defaults to 1000) — +Safe latent guidance as defined in Safe Latent Diffusion. +sld_guidance_scale is defined as sS of Eq. 6. If set to less than 1, safety guidance will be +disabled. + + +sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD will only be applied for diffusion steps greater than +sld_warmup_steps. sld_warmup_steps is defined as delta of Safe Latent +Diffusion. + + +sld_threshold (float, optional, defaults to 0.01) — +Threshold of the hyperplane that separates appropriate from inappropriate images. sld_threshold +is defined as lambda of Eq. 5 in Safe Latent Diffusion. + + +sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_momentum_scale is defined as sm of Eq. 7 in Safe Latent +Diffusion. + + +sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_mom_beta is defined as beta m of Eq. 8 in Safe Latent +Diffusion. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
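To show how the sld_* arguments documented above fit together in a call, here is a minimal sketch (not part of the original page); the prompt is a placeholder and the explicit values are illustrative rather than a recommended configuration. Copied
import torch
from diffusers import StableDiffusionPipelineSafe
from diffusers.pipelines.stable_diffusion_safe import SafetyConfig

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a person on a city street, photorealistic"  # placeholder prompt

# Option 1: unpack one of the predefined configurations (WEAK, MEDIUM, STRONG, MAX).
image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0]

# Option 2: pass the safety-guidance arguments explicitly (illustrative values).
image = pipeline(
    prompt=prompt,
    sld_guidance_scale=2000,  # strength of the safety guidance
    sld_warmup_steps=7,       # apply SLD only after this many diffusion steps
    sld_threshold=0.025,      # hyperplane threshold (lambda)
    sld_momentum_scale=0.5,   # momentum added to the safety guidance
    sld_mom_beta=0.7,         # how much of the previous momentum is kept
).images[0]
image.save("safe_stable_diffusion.png")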
diff --git a/scrapped_outputs/c91085fe08c1ed11446ff71190c49936.txt b/scrapped_outputs/c91085fe08c1ed11446ff71190c49936.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c91688153f87611c2027448974dae762.txt b/scrapped_outputs/c91688153f87611c2027448974dae762.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c92e2d02549d01834398328e6f313e0b.txt b/scrapped_outputs/c92e2d02549d01834398328e6f313e0b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c94ddeb385d7a5a4efe9b51845d9843c.txt b/scrapped_outputs/c94ddeb385d7a5a4efe9b51845d9843c.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/c94ddeb385d7a5a4efe9b51845d9843c.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt, unlike existing text-to-image diffusion models such as Stable Diffusion, which only generate an image. With almost the same number of parameters, LDM3D is able to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original, the original checkpoint used in the paper, and ldm3d-4c, a newer version of LDM3D that uses 4-channel inputs instead of 6-channel inputs and is finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. 
This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. 
This checkpoint enables the upscaling of RGB and depth images. It can be used in a cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from the community pipelines. diff --git a/scrapped_outputs/c9562a3c705624d0b45fdc26154d2d7d.txt b/scrapped_outputs/c9562a3c705624d0b45fdc26154d2d7d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/c96e8f3e6553eec50a0e62c5a1bcff39.txt b/scrapped_outputs/c96e8f3e6553eec50a0e62c5a1bcff39.txt new file mode 100644 index 0000000000000000000000000000000000000000..4693493f47eb216514601520c5cc11a456764ba4 --- /dev/null +++ b/scrapped_outputs/c96e8f3e6553eec50a0e62c5a1bcff39.txt @@ -0,0 +1,941 @@ +Image-to-Image Generation + + +StableDiffusionImg2ImgPipeline + +The Stable Diffusion model was created by the researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion. +The original codebase can be found here: CompVis/stable-diffusion +StableDiffusionImg2ImgPipeline is compatible with all Stable Diffusion checkpoints for Text-to-Image. +The pipeline uses the diffusion-denoising mechanism proposed in SDEdit (SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations +by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon). + +class diffusers.StableDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image-to-image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
+In addition the pipeline inherits the following loading methods: +Textual-Inversion: loaders.TextualInversionLoaderMixin.load_textual_inversion() +LoRA: loaders.LoraLoaderMixin.load_lora_weights() +Ckpt: loaders.FromCkptMixin.from_ckpt() +as well as the following saving methods: +LoRA: loaders.LoraLoaderMixin.save_lora_weights() + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "runwayml/stable-diffusion-v1-5" +>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("fantasy_landscape.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. 
If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +from_ckpt + +< +source +> +( +pretrained_model_link_or_path +**kwargs + +) + + +Parameters + +pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file on the Hub. Should be in the format +"https://huggingface.co//blob/main/" +A path to a file containing all pipeline weights. + + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
+
+
+local_files_only (bool, optional, defaults to False) —
+Whether or not to only look at local files (i.e., do not try to download the model).
+
+
+use_auth_token (str or bool, optional) —
+The token to use as HTTP bearer authorization for remote files. If True, will use the token generated
+when running huggingface-cli login (stored in ~/.huggingface).
+
+
+revision (str, optional, defaults to "main") —
+The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
+git-based system for storing models and other artifacts on huggingface.co, so revision can be any
+identifier allowed by git.
+
+
+use_safetensors (bool, optional) —
+If set to True, the pipeline is loaded from safetensors weights. If set to None (the default), the
+pipeline loads safetensors weights when they are available and the safetensors library is installed.
+If set to False, the pipeline does not use safetensors.
+
+
+extract_ema (bool, optional, defaults to False) — Only relevant for
+checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights or not. Defaults
+to False. Pass True to extract the EMA weights. EMA weights usually yield higher quality images for
+inference. Non-EMA weights are usually better to continue fine-tuning.
+
+
+upcast_attention (bool, optional, defaults to None) —
+Whether the attention computation should always be upcasted. This is necessary when running Stable
+Diffusion 2.1.
+
+
+image_size (int, optional, defaults to 512) —
+The image size that the model was trained on. Use 512 for Stable Diffusion v1.X and Stable Diffusion v2
+Base. Use 768 for Stable Diffusion v2.
+
+
+prediction_type (str, optional) —
+The prediction type that the model was trained on. Use 'epsilon' for Stable Diffusion v1.X and Stable
+Diffusion v2 Base. Use 'v_prediction' for Stable Diffusion v2.
+
+
+num_in_channels (int, optional, defaults to None) —
+The number of input channels. If None, it will be automatically inferred.
+
+
+scheduler_type (str, optional, defaults to 'pndm') —
+Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"].
+
+
+load_safety_checker (bool, optional, defaults to True) —
+Whether to load the safety checker or not. Defaults to True.
+
+
+kwargs (remaining dictionary of keyword arguments, optional) —
+Can be used to overwrite loadable and saveable variables (i.e. the pipeline components) of the
+specific pipeline class. The overwritten components are then directly passed to the pipeline's
+__init__ method. See the example below for more information.
+
+
+
+Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights saved in the original .ckpt format.
+The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated).
+
+Examples:
+
+
+ Copied
+>>> import torch
+>>> from diffusers import StableDiffusionPipeline

+>>> # Download pipeline from huggingface.co and cache.
+>>> pipeline = StableDiffusionPipeline.from_ckpt(
+...     "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
+... )

+>>> # Load pipeline from a local file
+>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
+>>> pipeline = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly.ckpt")

+>>> # Enable float16 and move to GPU
+>>> pipeline = StableDiffusionPipeline.from_ckpt(
+...     "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
+...     torch_dtype=torch.float16,
+...
) +>>> pipeline.to("cuda") + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_lora_weights + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +unet_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the +serialization process easier and cleaner. 
+
+
+text_encoder_lora_layers (Dict[str, torch.nn.Module]) —
+State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from
+transformers, we cannot rejig it. That is why we have to explicitly pass the text encoder LoRA state
+dict.
+
+
+is_main_process (bool, optional, defaults to True) —
+Whether the process calling this is the main process or not. Useful during distributed training (for
+example on TPUs) when this function needs to be called on all processes. In this case, set
+is_main_process=True only on the main process to avoid race conditions.
+
+
+save_function (Callable) —
+The function to use to save the state dictionary. Useful during distributed training (for example on
+TPUs) when torch.save needs to be replaced with another method. Can be configured with the environment
+variable DIFFUSERS_SAVE_MODE.
+
+
+
+Save the LoRA parameters corresponding to the UNet and the text encoder.
+
+enable_model_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
+to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward
+method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with
+enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
+
+enable_sequential_cpu_offload
+
+<
+source
+>
+(
+gpu_id = 0
+
+)
+
+
+
+Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
+text_encoder, vae and safety checker have their state dicts saved to CPU and are then moved to
+torch.device('meta'), loaded to GPU only when their specific submodule has its forward method called.
+Note that offloading happens on a submodule basis. Memory savings are higher than with
+enable_model_cpu_offload, but performance is lower.
+
+class diffusers.FlaxStableDiffusionImg2ImgPipeline
+
+<
+source
+>
+(
+vae: FlaxAutoencoderKL
+text_encoder: FlaxCLIPTextModel
+tokenizer: CLIPTokenizer
+unet: FlaxUNet2DConditionModel
+scheduler: typing.Union[diffusers.schedulers.scheduling_ddim_flax.FlaxDDIMScheduler, diffusers.schedulers.scheduling_pndm_flax.FlaxPNDMScheduler, diffusers.schedulers.scheduling_lms_discrete_flax.FlaxLMSDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep_flax.FlaxDPMSolverMultistepScheduler]
+safety_checker: FlaxStableDiffusionSafetyChecker
+feature_extractor: CLIPImageProcessor
+dtype: dtype = 
+
+)
+
+
+Parameters
+
+vae (FlaxAutoencoderKL) —
+Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
+
+
+text_encoder (FlaxCLIPTextModel) —
+Frozen text-encoder. Stable Diffusion uses the text portion of
+CLIP,
+specifically the clip-vit-large-patch14 variant.
+
+
+tokenizer (CLIPTokenizer) —
+Tokenizer of class
+CLIPTokenizer.
+
+
+unet (FlaxUNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
+
+
+scheduler (SchedulerMixin) —
+A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
+FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or
+FlaxDPMSolverMultistepScheduler.
+
+
+safety_checker (FlaxStableDiffusionSafetyChecker) —
+Classification module that estimates whether generated images could be considered offensive or harmful.
+Please refer to the model card for details.
+ + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for image-to-image generation using Stable Diffusion. +This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt_ids: array +image: array +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +prng_seed: PRNGKeyArray +strength: float = 0.8 +num_inference_steps: int = 50 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +guidance_scale: typing.Union[float, array] = 7.5 +noise: array = None +neg_prompt_ids: array = None +return_dict: bool = True +jit: bool = False + +) +→ +FlaxStableDiffusionPipelineOutput or tuple + +Parameters + +prompt_ids (jnp.array) — +The prompt or prompts to guide the image generation. + + +image (jnp.array) — +Array representing an image batch, that will be used as the starting point for the process. + + +params (Dict or FrozenDict) — Dictionary containing the model parameters/weights + + +prng_seed (jax.random.KeyArray or jax.Array) — Array containing random number generator key + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +noise (jnp.array, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. tensor will ge generated +by sampling using the supplied random generator. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. + + +jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. NOTE: This argument +exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release. + + +Returns + +FlaxStableDiffusionPipelineOutput or tuple + + + +FlaxStableDiffusionPipelineOutput if return_dict is True, otherwise a +tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline + + +>>> def create_key(seed=0): +... return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> init_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_img = init_img.resize((768, 512)) + +>>> prompts = "A fantasy landscape, trending on artstation" + +>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... revision="flax", +... dtype=jnp.bfloat16, +... ) + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) +>>> prompt_ids, processed_image = pipeline.prepare_inputs( +... prompt=[prompts] * num_samples, image=[init_img] * num_samples +... ) +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipeline( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... strength=0.75, +... num_inference_steps=50, +... jit=True, +... height=512, +... width=768, +... ).images + +>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) diff --git a/scrapped_outputs/c97948b017316b19132e598045a382ff.txt b/scrapped_outputs/c97948b017316b19132e598045a382ff.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a289245a8ad76d1c08b10f1992b7f746fc18cd3 --- /dev/null +++ b/scrapped_outputs/c97948b017316b19132e598045a382ff.txt @@ -0,0 +1,16 @@ +Stochastic Karras VE Elucidating the Design Space of Diffusion-Based Generative Models is by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. This pipeline implements the stochastic sampling tailored to variance expanding (VE) models. The abstract from the paper: We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of a previously trained ImageNet-64 model from 2.07 to near-SOTA 1.55, and after re-training with our proposed improvements to a new SOTA of 1.36. 
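A minimal usage sketch for the KarrasVePipeline documented below (the checkpoint name and the pairing of an NCSN++ score network with the KarrasVeScheduler are assumptions for illustration, not part of the original release, so output quality is not guaranteed):

 Copied
import torch
from diffusers import KarrasVePipeline, KarrasVeScheduler, UNet2DModel

# assumed unconditional score-network checkpoint; substitute any UNet2DModel trained for VE-style sampling
unet = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", subfolder="unet")
scheduler = KarrasVeScheduler()

pipe = KarrasVePipeline(unet=unet, scheduler=scheduler).to("cuda")

# sample a single image with the stochastic Karras VE sampler
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(batch_size=1, num_inference_steps=50, generator=generator).images[0]
image.save("karras_ve_sample.png")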
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KarrasVePipeline class diffusers.KarrasVePipeline < source > ( unet: UNet2DModel scheduler: KarrasVeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (KarrasVeScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image. Pipeline for unconditional image generation. __call__ < source > ( batch_size: int = 1 num_inference_steps: int = 50 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/c9832fb56f8f6ed45fbeb6cc46ad909f.txt b/scrapped_outputs/c9832fb56f8f6ed45fbeb6cc46ad909f.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca96e25d4b56d29e1aaf758a2139d89e38d25509 --- /dev/null +++ b/scrapped_outputs/c9832fb56f8f6ed45fbeb6cc46ad909f.txt @@ -0,0 +1,387 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. 
+Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object.
+Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing.
+As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from a prompt, run the following Python code: Copied import torch
+import imageio
+from diffusers import TextToVideoZeroPipeline
+
+model_id = "runwayml/stable-diffusion-v1-5"
+pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
+
+prompt = "A panda is playing guitar on times square"
+result = pipe(prompt=prompt).images
+result = [(r * 255).astype("uint8") for r in result]
+imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1): motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1): t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length: video_length, the number of video frames to be generated.
Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch +from diffusers import TextToVideoZeroPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24 #24 ÷ 4fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for the temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL Support +In order to use the SDXL model when generating a video from prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control + +To generate a video from prompt with additional pose control +Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract pose from actual video, read ControlNet documentation. 
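If you are starting from a raw video instead of pre-extracted pose frames, the optional sketch below shows one way to obtain them; the controlnet_aux package, the lllyasviel/Annotators weights, and the local my_dancing_video.mp4 path are assumptions here, so adapt them to your setup (see the ControlNet documentation for details):

 Copied
# pip install controlnet_aux
import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector  # assumed helper package from the ControlNet ecosystem

# assumed detector weights; any OpenPose annotator checkpoint supported by controlnet_aux works
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

reader = imageio.get_reader("my_dancing_video.mp4", "ffmpeg")  # hypothetical local video
frame_count = 8
pose_images = [open_pose(Image.fromarray(reader.get_data(i))) for i in range(frame_count)]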
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be utilized to generate a video from prompt using ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from prompt with additional Canny edge control, follow the same steps described above for pose-guided generation using Canny edge ControlNet model. 
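For reference, a minimal sketch of the edge-guided variant is given below; it mirrors the pose-guided snippet above, swapping in the Canny edge ControlNet and reading pre-extracted Canny edge frames from the same demo asset that the DreamBooth example further down uses (the prompt is an arbitrary placeholder):

 Copied
import imageio
import torch
from huggingface_hub import hf_hub_download
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# download a demo video of pre-extracted Canny edge frames
filename = "__assets__/canny_videos_mp4/girl_turning.mp4"
video_path = hf_hub_download(repo_type="space", repo_id="PAIR/Text2Video-Zero", filename=filename)

reader = imageio.get_reader(video_path, "ffmpeg")
frame_count = 8
canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]

model_id = "runwayml/stable-diffusion-v1-5"
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Set the cross-frame attention processor, exactly as in the pose-guided example
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# fix latents for all frames so the scene stays consistent
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)

prompt = "a woman turning around, best quality"
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)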
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. diff --git a/scrapped_outputs/c9dd2acd25c6a6c6d19c622132b96d7e.txt b/scrapped_outputs/c9dd2acd25c6a6c6d19c622132b96d7e.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/c9dd2acd25c6a6c6d19c622132b96d7e.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. 
Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how you load .safetensor files, and how to convert Stable Diffusion model weights stored in other formats to .safetensor. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. 
The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/ca365c0c3871266d253718936e5b7363.txt b/scrapped_outputs/ca365c0c3871266d253718936e5b7363.txt new file mode 100644 index 0000000000000000000000000000000000000000..eae7bd4faaff8cd637ecc9e7c8c93e29f55d9c4b --- /dev/null +++ b/scrapped_outputs/ca365c0c3871266d253718936e5b7363.txt @@ -0,0 +1,166 @@ +Text or image-to-video Driven by the success of text-to-image diffusion models, generative video models are able to generate short clips of video from a text prompt or an initial image. These models extend a pretrained diffusion model to generate videos by adding some type of temporal and/or spatial convolution layer to the architecture. A mixed dataset of images and videos are used to train the model which learns to output a series of video frames based on the text or image conditioning. This guide will show you how to generate videos, how to configure video model parameters, and how to control video generation. Popular models Discover other cool and trending video generation models on the Hub here! Stable Video Diffusions (SVD), I2VGen-XL, AnimateDiff, and ModelScopeT2V are popular models used for video diffusion. Each model is distinct. For example, AnimateDiff inserts a motion modeling module into a frozen text-to-image model to generate personalized animated images, whereas SVD is entirely pretrained from scratch with a three-stage training process to generate short high-quality videos. Stable Video Diffusion SVD is based on the Stable Diffusion 2.1 model and it is trained on images, then low-resolution videos, and finally a smaller dataset of high-resolution videos. This model generates a short 2-4 second video from an initial image. You can learn more details about model, like micro-conditioning, in the Stable Video Diffusion guide. Begin by loading the StableVideoDiffusionPipeline and passing an initial image to generate a video from. Copied import torch +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipeline = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipeline(image, decode_chunk_size=8, generator=generator).frames[0] +export_to_video(frames, "generated.mp4", fps=7) initial image generated video I2VGen-XL I2VGen-XL is a diffusion model that can generate higher resolution videos than SVD and it is also capable of accepting text prompts in addition to images. The model is trained with two hierarchical encoders (detail and global encoder) to better capture low and high-level details in images. 
These learned details are used to train a video diffusion model which refines the video resolution and details in the generated video. You can use I2VGen-XL by loading the I2VGenXLPipeline, and passing a text and image prompt to generate a video. Copied import torch +from diffusers import I2VGenXLPipeline +from diffusers.utils import export_to_gif, load_image + +pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() + +image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +image = load_image(image_url).convert("RGB") + +prompt = "Papers were floating in the air on a table in the library" +negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +generator = torch.manual_seed(8888) + +frames = pipeline( + prompt=prompt, + image=image, + num_inference_steps=50, + negative_prompt=negative_prompt, + guidance_scale=9.0, + generator=generator +).frames[0] +export_to_gif(frames, "i2v.gif") initial image generated video AnimateDiff AnimateDiff is an adapter model that inserts a motion module into a pretrained diffusion model to animate an image. The adapter is trained on video clips to learn motion which is used to condition the generation process to create a video. It is faster and easier to only train the adapter and it can be loaded into most diffusion models, effectively turning them into “video models”. Start by loading a MotionAdapter. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) Then load a finetuned Stable Diffusion model with the AnimateDiffPipeline. Copied pipeline = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + "emilianJR/epiCRealism", + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipeline.scheduler = scheduler +pipeline.enable_vae_slicing() +pipeline.enable_model_cpu_offload() Create a prompt and generate the video. Copied output = pipeline( + prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution", + negative_prompt="bad quality, worse quality, low resolution", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=50, + generator=torch.Generator("cpu").manual_seed(49), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") ModelscopeT2V ModelscopeT2V adds spatial and temporal convolutions and attention to a UNet, and it is trained on image-text and video-text datasets to enhance what it learns during training. The model takes a prompt, encodes it and creates text embeddings which are denoised by the UNet, and then decoded by a VQGAN into a video. ModelScopeT2V generates watermarked videos due to the datasets it was trained on. To use a watermark-free model, try the cerspense/zeroscope_v2_76w model with the TextToVideoSDPipeline first, and then upscale it’s output with the cerspense/zeroscope_v2_XL checkpoint using the VideoToVideoSDPipeline. Load a ModelScopeT2V checkpoint into the DiffusionPipeline along with a prompt to generate a video. 
Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipeline = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +prompt = "Confident teddy bear surfer rides the wave in the tropics" +video_frames = pipeline(prompt).frames[0] +export_to_video(video_frames, "modelscopet2v.mp4", fps=10) Configure model parameters There are a few important parameters you can configure in the pipeline that’ll affect the video generation process and quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Number of frames The num_frames parameter determines how many video frames are generated per second. A frame is an image that is played in a sequence of other frames to create motion or a video. This affects video length because the pipeline generates a certain number of frames per second (check a pipeline’s API reference for the default value). To increase the video duration, you’ll need to increase the num_frames parameter. Copied import torch +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipeline = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() + +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipeline(image, decode_chunk_size=8, generator=generator, num_frames=25).frames[0] +export_to_video(frames, "generated.mp4", fps=7) num_frames=14 num_frames=25 Guidance scale The guidance_scale parameter controls how closely aligned the generated video and text prompt or initial image is. A higher guidance_scale value means your generated video is more aligned with the text prompt or initial image, while a lower guidance_scale value means your generated video is less aligned which could give the model more “creativity” to interpret the conditioning input. SVD uses the min_guidance_scale and max_guidance_scale parameters for applying guidance to the first and last frames respectively. Copied import torch +from diffusers import I2VGenXLPipeline +from diffusers.utils import export_to_gif, load_image + +pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +pipeline.enable_model_cpu_offload() + +image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +image = load_image(image_url).convert("RGB") + +prompt = "Papers were floating in the air on a table in the library" +negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +generator = torch.manual_seed(0) + +frames = pipeline( + prompt=prompt, + image=image, + num_inference_steps=50, + negative_prompt=negative_prompt, + guidance_scale=1.0, + generator=generator +).frames[0] +export_to_gif(frames, "i2v.gif") guidance_scale=9.0 guidance_scale=1.0 Negative prompt A negative prompt deters the model from generating things you don’t want it to. This parameter is commonly used to improve overall generation quality by removing poor or bad features such as “low resolution” or “bad details”. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) + +pipeline = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + "emilianJR/epiCRealism", + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipeline.scheduler = scheduler +pipeline.enable_vae_slicing() +pipeline.enable_model_cpu_offload() + +output = pipeline( + prompt="360 camera shot of a sushi roll in a restaurant", + negative_prompt="Distorted, discontinuous, ugly, blurry, low resolution, motionless, static", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=50, + generator=torch.Generator("cpu").manual_seed(0), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") no negative prompt negative prompt applied Model-specific parameters There are some pipeline parameters that are unique to each model such as adjusting the motion in a video or adding noise to the initial image. Stable Video Diffusion Text2Video-Zero Stable Video Diffusion provides additional micro-conditioning for the frame rate with the fps parameter and for motion with the motion_bucket_id parameter. Together, these parameters allow for adjusting the amount of motion in the generated video. There is also a noise_aug_strength parameter that increases the amount of noise added to the initial image. Varying this parameter affects how similar the generated video and initial image are. A higher noise_aug_strength also increases the amount of motion. To learn more, read the Micro-conditioning guide. Control video generation Video generation can be controlled similar to how text-to-image, image-to-image, and inpainting can be controlled with a ControlNetModel. The only difference is you need to use the CrossFrameAttnProcessor so each frame attends to the first frame. Text2Video-Zero Text2Video-Zero video generation can be conditioned on pose and edge images for even greater control over a subject’s motion in the generated video or to preserve the identity of a subject/object in the video. You can also use Text2Video-Zero with InstructPix2Pix for editing videos with text. pose control edge control InstructPix2Pix Start by downloading a video and extracting the pose images from it. Copied from huggingface_hub import hf_hub_download +from PIL import Image +import imageio + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Load a ControlNetModel for pose estimation and a checkpoint into the StableDiffusionControlNetPipeline. Then you’ll use the CrossFrameAttnProcessor for the UNet and ControlNet. 
Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +pipeline.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipeline.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) Fix the latents for all the frames, and then pass your prompt and extracted pose images to the model to generate a video. Copied latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipeline(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Optimize Video generation requires a lot of memory because you’re generating many video frames at once. You can reduce your memory requirements at the expense of some inference speed. Try: offloading pipeline components that are no longer needed to the CPU feed-forward chunking runs the feed-forward layer in a loop instead of all at once break up the number of frames the VAE has to decode into chunks instead of decoding them all at once Copied - pipeline.enable_model_cpu_offload() +- frames = pipeline(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipeline.enable_model_cpu_offload() ++ pipeline.unet.enable_forward_chunking() ++ frames = pipeline(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] If memory is not an issue and you want to optimize for speed, try wrapping the UNet with torch.compile. Copied - pipeline.enable_model_cpu_offload() ++ pipeline.to("cuda") ++ pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) diff --git a/scrapped_outputs/ca3752d403050e6d4708119740cad1fa.txt b/scrapped_outputs/ca3752d403050e6d4708119740cad1fa.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a6f8d6ced6b91e1a0e4d7840137c4d469ea2882 --- /dev/null +++ b/scrapped_outputs/ca3752d403050e6d4708119740cad1fa.txt @@ -0,0 +1,154 @@ +Scalable Diffusion Models with Transformers (DiT) + + +Overview + +Scalable Diffusion Models with Transformers (DiT) by William Peebles and Saining Xie. +The abstract of the paper is the following: +We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. +The original codebase of this paper can be found here: facebookresearch/dit. 
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dit.py +Conditional Image Generation +- + +Usage example + + + + Copied +from diffusers import DiTPipeline, DPMSolverMultistepScheduler +import torch + +pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +# pick words from Imagenet class labels +pipe.labels # to print all available words + +# pick words that exist in ImageNet +words = ["white shark", "umbrella"] + +class_ids = pipe.get_label_ids(words) + +generator = torch.manual_seed(33) +output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +image = output.images[0] # label 'white shark' + +DiTPipeline + + +class diffusers.DiTPipeline + +< +source +> +( +transformer: Transformer2DModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers +id2label: typing.Union[typing.Dict[int, str], NoneType] = None + +) + + +Parameters + +transformer (Transformer2DModel) — +Class conditioned Transformer in Diffusion model to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +scheduler (DDIMScheduler) — +A scheduler to be used in combination with dit to denoise the encoded image latents. + + + +This pipeline inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +class_labels: typing.List[int] +guidance_scale: float = 4.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +class_labels (List[int]) — +List of imagenet class labels for the images to be generated. + + +guidance_scale (float, optional, defaults to 4.0) — +Scale of the guidance signal. + + +generator (torch.Generator, optional) — +A torch generator to make generation +deterministic. + + +num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +get_label_ids + +< +source +> +( +label: typing.Union[str, typing.List[str]] + +) +→ +list of int + +Parameters + +label (str or dict of str) — label strings to be mapped to class ids. + + +Returns + +list of int + + + +Class ids to be processed by pipeline. + + +Map label strings, e.g. from ImageNet, to corresponding class ids. 
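For example, tying get_label_ids back to the pipeline call (a minimal sketch that assumes the pipe from the usage example above is already loaded):

# Map human-readable ImageNet label strings to class ids, then condition generation on them
class_ids = pipe.get_label_ids(["white shark", "umbrella"])  # returns a list of ints
output = pipe(class_labels=class_ids, num_inference_steps=25, generator=torch.manual_seed(33))
images = output.images  # one generated image per class label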
diff --git a/scrapped_outputs/ca7c0a746cdb2389b92cdde2dad9f4a3.txt b/scrapped_outputs/ca7c0a746cdb2389b92cdde2dad9f4a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/ca7c0a746cdb2389b92cdde2dad9f4a3.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. 
Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/cab285b7cdcbde06cddf88280a565c7b.txt b/scrapped_outputs/cab285b7cdcbde06cddf88280a565c7b.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/cab285b7cdcbde06cddf88280a565c7b.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. 
For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. 
If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/cad070da34f7d671c88e510501b5401b.txt b/scrapped_outputs/cad070da34f7d671c88e510501b5401b.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/cad070da34f7d671c88e510501b5401b.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how you load .safetensor files, and how to convert Stable Diffusion model weights stored in other formats to .safetensor. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. 
By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. 
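As a rough illustration of lazy loading, safetensors can open a checkpoint and read individual tensors on demand instead of loading the whole file at once. The snippet below is a minimal sketch; "model.safetensors" is only a placeholder filename and the "unet." key prefix is an assumed naming scheme:

from safetensors import safe_open

tensors = {}
# Nothing is read into memory until a tensor is actually requested
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        # Load only the tensors this process needs, e.g. a subset of the weights
        if key.startswith("unet."):
            tensors[key] = f.get_tensor(key)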
diff --git a/scrapped_outputs/cad675827cc561bb4992678dc845410c.txt b/scrapped_outputs/cad675827cc561bb4992678dc845410c.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/cad675827cc561bb4992678dc845410c.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from the source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) in some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/caf9e77588a817ec3493c9b89937270f.txt b/scrapped_outputs/caf9e77588a817ec3493c9b89937270f.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff714448fde8a5841e9c4833e95b6589962a2ce --- /dev/null +++ b/scrapped_outputs/caf9e77588a817ec3493c9b89937270f.txt @@ -0,0 +1 @@ +Overview 🧨 Diffusers offers many pipelines, models, and schedulers for generative tasks. To make loading these components as simple as possible, we provide a single and unified method - from_pretrained() - that loads any of these components from either the Hugging Face Hub or your local machine. Whenever you load a pipeline or model, the latest files are automatically downloaded and cached so you can quickly reuse them next time without redownloading the files. This section will show you everything you need to know about loading pipelines, how to load different components in a pipeline, how to load checkpoint variants, and how to load community pipelines. You’ll also learn how to load schedulers and compare the speed and quality trade-offs of using different schedulers. Finally, you’ll see how to convert and load KerasCV checkpoints so you can use them in PyTorch with 🧨 Diffusers. diff --git a/scrapped_outputs/caff6a37039a7877771c60a6a311ead6.txt b/scrapped_outputs/caff6a37039a7877771c60a6a311ead6.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/caff6a37039a7877771c60a6a311ead6.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. 
Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing! 🧨 diff --git a/scrapped_outputs/cb034fd8b08d4c5a5131ca07b60570e9.txt b/scrapped_outputs/cb034fd8b08d4c5a5131ca07b60570e9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/cb2dadd0032fe4194e69a9bc0c394e91.txt b/scrapped_outputs/cb2dadd0032fe4194e69a9bc0c394e91.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/cb2dadd0032fe4194e69a9bc0c394e91.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. 
For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training, which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script that are relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images. Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, the training loop starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.
Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 
Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/cb4422fe147c1755054a3359cb493c80.txt b/scrapped_outputs/cb4422fe147c1755054a3359cb493c80.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b91d27246c47e715d8fb32343e10ffa0337626a --- /dev/null +++ b/scrapped_outputs/cb4422fe147c1755054a3359cb493c80.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. 
Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. 
You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/cb6dcd6ccb29b68d56642da5f16f5b6f.txt b/scrapped_outputs/cb6dcd6ccb29b68d56642da5f16f5b6f.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/cb6dcd6ccb29b68d56642da5f16f5b6f.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. 
For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/cb82cb9e85e32336ce02ab02483e34bc.txt b/scrapped_outputs/cb82cb9e85e32336ce02ab02483e34bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/cb82cb9e85e32336ce02ab02483e34bc.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. They only generate images that resemble the distribution of their training data. This guide will explore the train_unconditional.py training script to help you become familiar with it, and show you how to adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function.
It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. 
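To make that pattern concrete, here is a minimal, illustrative sketch of the core step the training loop performs (sample random timesteps, add noise with the scheduler, predict the noise with the UNet, and regress with an MSE loss). It assumes the model, noise_scheduler, optimizer, and train_dataloader created above; the "input" batch key and the variable names are simplifications for illustration, not the exact code of the script:
Copied import torch
+import torch.nn.functional as F
+
+for batch in train_dataloader:
+    clean_images = batch["input"]  # illustrative key for a batch of images scaled to [-1, 1]
+    noise = torch.randn_like(clean_images)
+    timesteps = torch.randint(
+        0, noise_scheduler.config.num_train_timesteps, (clean_images.shape[0],), device=clean_images.device
+    ).long()
+
+    # forward diffusion: corrupt the clean images at the sampled timesteps
+    noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)
+
+    # predict the noise residual and compare it to the noise that was actually added
+    noise_pred = model(noisy_images, timesteps).sample
+    loss = F.mse_loss(noise_pred, noise)
+
+    loss.backward()
+    optimizer.step()
+    optimizer.zero_grad()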
Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/cbc52d2fb70df8ea3dc13f5c5ff0a0e8.txt b/scrapped_outputs/cbc52d2fb70df8ea3dc13f5c5ff0a0e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/cbc52d2fb70df8ea3dc13f5c5ff0a0e8.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate There are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide. Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for a lower memory requirement: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering.
Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks together should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Video Diffusion also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the value, the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/cbf5bef925c2e1c70a85d3da85f341cf.txt b/scrapped_outputs/cbf5bef925c2e1c70a85d3da85f341cf.txt new file mode 100644 index 0000000000000000000000000000000000000000..65bdd33b14d0e8369c584da67430e389c4261fd0 --- /dev/null +++ b/scrapped_outputs/cbf5bef925c2e1c70a85d3da85f341cf.txt @@ -0,0 +1,18 @@ +Installation 🤗 Diffusers is tested on Python 3.8+, PyTorch 1.7.0+, and Flax. Follow the installation instructions below for the deep learning library you are using: PyTorch installation instructions Flax installation instructions Install with pip You should install 🤗 Diffusers in a virtual environment. +If you’re unfamiliar with Python virtual environments, take a look at this guide. +A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies. Start by creating a virtual environment in your project directory: Copied python -m venv .env Activate the virtual environment: Copied source .env/bin/activate You should also install 🤗 Transformers because 🤗 Diffusers relies on its models: Pytorch Copied pip install diffusers["torch"] transformers JAX Copied pip install diffusers["flax"] transformers Install with conda After activating your virtual environment, install 🤗 Diffusers with conda (maintained by the community): Copied conda install -c conda-forge diffusers Install from source Before installing 🤗 Diffusers from source, make sure you have PyTorch and 🤗 Accelerate installed.
To install 🤗 Accelerate: Copied pip install accelerate Then install 🤗 Diffusers from source: Copied pip install git+https://github.com/huggingface/diffusers This command installs the bleeding edge main version rather than the latest stable version. +The main version is useful for staying up-to-date with the latest developments, for instance when a bug has been fixed since the last official release but a new release hasn’t been rolled out yet. +However, this means the main version may not always be stable. +We strive to keep the main version operational, and most issues are usually resolved within a few hours or a day. +If you run into a problem, please open an Issue so we can fix it even sooner! Editable install You will need an editable install if you’d like to: Use the main version of the source code. Contribute to 🤗 Diffusers and need to test changes in the code. Clone the repository and install 🤗 Diffusers with the following commands: Copied git clone https://github.com/huggingface/diffusers.git +cd diffusers Pytorch Copied pip install -e ".[torch]" JAX Copied pip install -e ".[flax]" These commands link the folder you cloned the repository to with your Python library paths. +Python will now look inside the folder you cloned to in addition to the normal library paths. +For example, if your Python packages are typically installed in ~/anaconda3/envs/main/lib/python3.8/site-packages/, Python will also search the ~/diffusers/ folder you cloned to. You must keep the diffusers folder if you want to keep using the library. Now you can easily update your clone to the latest version of 🤗 Diffusers with the following command: Copied cd ~/diffusers/ +git pull Your Python environment will find the main version of 🤗 Diffusers on the next run. Cache Model weights and files are downloaded from the Hub to a cache, which is usually in your home directory. You can change the cache location by specifying the HF_HOME or HUGGINGFACE_HUB_CACHE environment variables or configuring the cache_dir parameter in methods like from_pretrained(). Cached files allow you to run 🤗 Diffusers offline. To prevent 🤗 Diffusers from connecting to the internet, set the HF_HUB_OFFLINE environment variable to True and 🤗 Diffusers will only load previously downloaded files in the cache. Copied export HF_HUB_OFFLINE=True For more details about managing and cleaning the cache, take a look at the caching guide. Telemetry logging Our library gathers telemetry information during from_pretrained() requests. +The data gathered includes the version of 🤗 Diffusers and PyTorch/Flax, the requested model or pipeline class, +and the path to a pretrained checkpoint if it is hosted on the Hugging Face Hub. +This usage data helps us debug issues and prioritize new features. +Telemetry is only sent when loading models and pipelines from the Hub, +and it is not collected if you’re loading local files. We understand that not everyone wants to share additional information, and we respect your privacy.
+You can disable telemetry collection by setting the DISABLE_TELEMETRY environment variable from your terminal: On Linux/MacOS: Copied export DISABLE_TELEMETRY=YES On Windows: Copied set DISABLE_TELEMETRY=YES diff --git a/scrapped_outputs/cc0bbf68183acf8e5dd192c875dc625c.txt b/scrapped_outputs/cc0bbf68183acf8e5dd192c875dc625c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/cc8bafe6f2904540e8dcd912ca4f4c34.txt b/scrapped_outputs/cc8bafe6f2904540e8dcd912ca4f4c34.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/cc8bafe6f2904540e8dcd912ca4f4c34.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/cc9b67bc14c691db71ff4ce431f1859e.txt b/scrapped_outputs/cc9b67bc14c691db71ff4ce431f1859e.txt new file mode 100644 index 0000000000000000000000000000000000000000..16849601e9d0912637c03645c62a6df01c5e7eef --- /dev/null +++ b/scrapped_outputs/cc9b67bc14c691db71ff4ce431f1859e.txt @@ -0,0 +1,90 @@ +Philosophy + +🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. +We aim at building a library that stands the test of time and therefore take API design very seriously. 
+In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: + +Usability over Performance + +While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. +Diffusers aim at being a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. +Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. + +Simple over easy + +As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: +We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. +Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. +Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. +Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. Dreambooth or textual inversion training +is very simple thanks to diffusers’ ability to separate single components of the diffusion pipeline. + +Tweakable, contributor-friendly over abstraction + +For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. 
+However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: +Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. +Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. +Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. +At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. +In diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, UnCLIP (Dalle-2) and Imagen all rely on the same diffusion model, the UNet. +Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. + +Design Philosophy in Details + +Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consist of three major classes, pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. + +Pipelines + +Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. +The following design principles are followed: +Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. +Pipelines all inherit from DiffusionPipeline. +Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. +Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. +Pipelines should be used only for inference. +Pipelines should be very readable, self-explanatory, and easy to tweak. 
+Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. +Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. +Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. +Pipelines should be named after the task they are intended to solve. +In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. + +Models + +Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. +The following design principles are followed: +Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. +All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… +Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. +Models intend to expose complexity, just like PyTorch’s module does, and give clear error messages. +Models all inherit from ModelMixin and ConfigMixin. +Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. +Models should by default have the highest precision and lowest performance setting. +To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. +Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. +The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable longterm, such as UNet blocks and Attention processors. + +Schedulers + +Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. +The following design principles are followed: +All schedulers are found in src/diffusers/schedulers. +Schedulers are not allowed to import from large utils files and shall be kept very self-contained. 
+One scheduler python file corresponds to one scheduler algorithm (as might be defined in a paper). +If schedulers share similar functionalities, we can make use of the #Copied from mechanism. +Schedulers all inherit from SchedulerMixin and ConfigMixin. +Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. +Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. +Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. +The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). +Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. +In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/cd06d960ab257b24b9c02c72bc753fb6.txt b/scrapped_outputs/cd06d960ab257b24b9c02c72bc753fb6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b20fa826f93ceab8b9350b48a73ddf983d626f35 --- /dev/null +++ b/scrapped_outputs/cd06d960ab257b24b9c02c72bc753fb6.txt @@ -0,0 +1,115 @@ +Custom Diffusion Custom Diffusion is a training technique for personalizing image generation models. Like Textual Inversion, DreamBooth, and LoRA, Custom Diffusion only requires a few (~4-5) example images. This technique works by only training weights in the cross-attention layers, and it uses a special word to represent the newly learned concept. Custom Diffusion is unique because it can also learn multiple concepts at the same time. If you’re training on a GPU with limited vRAM, you should try enabling xFormers with --enable_xformers_memory_efficient_attention for faster training with lower vRAM requirements (16GB). To save even more memory, add --set_grads_to_none in the training argument to set the gradients to None instead of zero (this option can cause some issues, so if you experience any, try removing this parameter). This guide will explore the train_custom_diffusion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies: Copied cd examples/custom_diffusion +pip install -r requirements.txt +pip install clip-retrieval 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script contains all the parameters to help you customize your training run. These are found in the parse_args() function. The function comes with default values, but you can also set your own values in the training command if you’d like. For example, to change the resolution of the input image: Copied accelerate launch train_custom_diffusion.py \ + --resolution=256 Many of the basic parameters are described in the DreamBooth training guide, so this guide focuses on the parameters unique to Custom Diffusion: --freeze_model: freezes the key and value parameters in the cross-attention layer; the default is crossattn_kv, but you can set it to crossattn to train all the parameters in the cross-attention layer --concepts_list: to learn multiple concepts, provide a path to a JSON file containing the concepts --modifier_token: a special word used to represent the learned concept --initializer_token: Prior preservation loss Prior preservation loss is a method that uses a model’s own generated samples to help it learn how to generate more diverse images. Because these generated sample images belong to the same class as the images you provided, they help the model retain what it has learned about the class and how it can use what it already knows about the class to make new compositions. Many of the parameters for prior preservation loss are described in the DreamBooth training guide. Regularization Custom Diffusion includes training the target images with a small set of real images to prevent overfitting. As you can imagine, this can be easy to do when you’re only training on a few images! Download 200 real images with clip_retrieval. The class_prompt should be the same category as the target images. These images are stored in class_data_dir. Copied python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 To enable regularization, add the following parameters: --with_prior_preservation: whether to use prior preservation loss --prior_loss_weight: controls the influence of the prior preservation loss on the model --real_prior: whether to use a small set of real images to prevent overfitting Copied accelerate launch train_custom_diffusion.py \ + --with_prior_preservation \ + --prior_loss_weight=1.0 \ + --class_data_dir="./real_reg/samples_cat" \ + --class_prompt="cat" \ + --real_prior=True \ Training script A lot of the code in the Custom Diffusion training script is similar to the DreamBooth script. This guide instead focuses on the code that is relevant to Custom Diffusion. The Custom Diffusion training script has two dataset classes: CustomDiffusionDataset: preprocesses the images, class images, and prompts for training PromptDataset: prepares the prompts for generating class images Next, the modifier_token is added to the tokenizer, converted to token ids, and the token embeddings are resized to account for the new modifier_token. Then the modifier_token embeddings are initialized with the embeddings of the initializer_token. All parameters in the text encoder are frozen, except for the token embeddings since this is what the model is trying to learn to associate with the concepts. 
Copied params_to_freeze = itertools.chain( + text_encoder.text_model.encoder.parameters(), + text_encoder.text_model.final_layer_norm.parameters(), + text_encoder.text_model.embeddings.position_embedding.parameters(), +) +freeze_params(params_to_freeze) Now you’ll need to add the Custom Diffusion weights to the attention layers. This is a really important step for getting the shape and size of the attention weights correct, and for setting the appropriate number of attention processors in each UNet block. Copied st = unet.state_dict() +for name, _ in unet.attn_processors.items(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + layer_name = name.split(".processor")[0] + weights = { + "to_k_custom_diffusion.weight": st[layer_name + ".to_k.weight"], + "to_v_custom_diffusion.weight": st[layer_name + ".to_v.weight"], + } + if train_q_out: + weights["to_q_custom_diffusion.weight"] = st[layer_name + ".to_q.weight"] + weights["to_out_custom_diffusion.0.weight"] = st[layer_name + ".to_out.0.weight"] + weights["to_out_custom_diffusion.0.bias"] = st[layer_name + ".to_out.0.bias"] + if cross_attention_dim is not None: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=train_kv, + train_q_out=train_q_out, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ).to(unet.device) + custom_diffusion_attn_procs[name].load_state_dict(weights) + else: + custom_diffusion_attn_procs[name] = attention_class( + train_kv=False, + train_q_out=False, + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + ) +del st +unet.set_attn_processor(custom_diffusion_attn_procs) +custom_diffusion_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized to update the cross-attention layer parameters: Copied optimizer = optimizer_class( + itertools.chain(text_encoder.get_input_embeddings().parameters(), custom_diffusion_layers.parameters()) + if args.modifier_token is not None + else custom_diffusion_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) In the training loop, it is important to only update the embeddings for the concept you’re trying to learn. This means setting the gradients of all the other token embeddings to zero: Copied if args.modifier_token is not None: + if accelerator.num_processes > 1: + grads_text_encoder = text_encoder.module.get_input_embeddings().weight.grad + else: + grads_text_encoder = text_encoder.get_input_embeddings().weight.grad + index_grads_to_zero = torch.arange(len(tokenizer)) != modifier_token_id[0] + for i in range(len(modifier_token_id[1:])): + index_grads_to_zero = index_grads_to_zero & ( + torch.arange(len(tokenizer)) != modifier_token_id[i] + ) + grads_text_encoder.data[index_grads_to_zero, :] = grads_text_encoder.data[ + index_grads_to_zero, : + ].fill_(0) Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 In this guide, you’ll download and use these example cat images. 
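One way to fetch a small example set like this is with snapshot_download from huggingface_hub, sketched below; the dataset repository id is a placeholder, so swap in the repository that actually hosts the example cat images:
Copied from huggingface_hub import snapshot_download
+
+local_dir = "./data/cat"  # matches the INSTANCE_DIR used in the training command below
+snapshot_download(
+    "diffusers/cat_example",  # placeholder dataset repo id, replace with the real one
+    local_dir=local_dir,
+    repo_type="dataset",
+    ignore_patterns=".gitattributes",
+)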
You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, INSTANCE_DIR to the path where you just downloaded the cat images to, and OUTPUT_DIR to where you want to save the model. You’ll use as the special word to tie the newly learned embeddings to. The script creates and saves model checkpoints and a pytorch_custom_diffusion_weights.bin file to your repository. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation prompt with --validation_prompt. This is useful for debugging and saving intermediate results. If you’re training on human faces, the Custom Diffusion team has found the following parameters to work well: --learning_rate=5e-6 --max_train_steps can be anywhere between 1000 and 2000 --freeze_model=crossattn use at least 15-20 images to train with single concept multiple concepts Copied export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export OUTPUT_DIR="path-to-save-model" +export INSTANCE_DIR="./data/cat" + +accelerate launch train_custom_diffusion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --class_data_dir=./real_reg/samples_cat/ \ + --with_prior_preservation \ + --real_prior \ + --prior_loss_weight=1.0 \ + --class_prompt="cat" \ + --num_class_images=200 \ + --instance_prompt="photo of a cat" \ + --resolution=512 \ + --train_batch_size=2 \ + --learning_rate=1e-5 \ + --lr_warmup_steps=0 \ + --max_train_steps=250 \ + --scale_lr \ + --hflip \ + --modifier_token "" \ + --validation_prompt=" cat sitting in a bucket" \ + --report_to="wandb" \ + --push_to_hub Once training is finished, you can use your new Custom Diffusion model for inference. single concept multiple concepts Copied import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, +).to("cuda") +pipeline.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") +pipeline.load_textual_inversion("path-to-save-model", weight_name=".bin") + +image = pipeline( + " cat sitting in a bucket", + num_inference_steps=100, + guidance_scale=6.0, + eta=1.0, +).images[0] +image.save("cat.png") Next steps Congratulations on training a model with Custom Diffusion! 🎉 To learn more: Read the Multi-Concept Customization of Text-to-Image Diffusion blog post to learn more details about the experimental results from the Custom Diffusion team. diff --git a/scrapped_outputs/cd21f897e0b68bd66827cc947fa5fb83.txt b/scrapped_outputs/cd21f897e0b68bd66827cc947fa5fb83.txt new file mode 100644 index 0000000000000000000000000000000000000000..c927ce7d06865cbd6e16bcd6bb15efd3ab6ad802 --- /dev/null +++ b/scrapped_outputs/cd21f897e0b68bd66827cc947fa5fb83.txt @@ -0,0 +1,152 @@ +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. al paper + + +Overview + +Inspired by Karras et. al. 
Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +KDPM2AncestralDiscreteScheduler + + +class diffusers.KDPM2AncestralDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Scheduler created by @crowsonkb in k_diffusion, see: +https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 +Scheduler inspired by DPM-Solver-2 and Algorthim 2 from Karras et al. (2022). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. 
Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/cd5ee29ea39e8311de5c75f8e48cb60c.txt b/scrapped_outputs/cd5ee29ea39e8311de5c75f8e48cb60c.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa53bdbaa4afd2403451b87ff52427c2d2fa0b7 --- /dev/null +++ b/scrapped_outputs/cd5ee29ea39e8311de5c75f8e48cb60c.txt @@ -0,0 +1,177 @@ +Configuration + +In Diffusers, schedulers of type schedulers.scheduling_utils.SchedulerMixin, and models of type ModelMixin inherit from ConfigMixin which conveniently takes care of storing all parameters that are +passed to the respective __init__ methods in a JSON-configuration file. + +ConfigMixin + + +class diffusers.ConfigMixin + +< +source +> +( +) + + + +Base class for all configuration classes. Stores all configuration parameters under self.config Also handles all +methods for loading/downloading/saving classes inheriting from ConfigMixin with +from_config() +save_config() +Class attributes: +config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). +ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). +has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). +_deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). + +load_config + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using save_config(), e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
+ + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + + +Instantiate a Python class from a config dictionary +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +from_config + +< +source +> +( +config: typing.Union[diffusers.configuration_utils.FrozenDict, typing.Dict[str, typing.Any]] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +config (Dict[str, Any]) — +A config dictionary from which the Python class will be instantiated. Make sure to only load +configuration files of compatible classes. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the Python class. +**kwargs will be directly passed to the underlying scheduler/model’s __init__ method and eventually +overwrite same named arguments of config. + + + +Instantiate a Python class from a config dictionary + +Examples: + + + Copied +>>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) + +save_config + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a configuration object to the directory save_directory, so that it can be re-loaded using the +from_config() class method. diff --git a/scrapped_outputs/cd5f970ea8294fc9c98fa6583a41814d.txt b/scrapped_outputs/cd5f970ea8294fc9c98fa6583a41814d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/cd5f970ea8294fc9c98fa6583a41814d.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. 
Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how you load .safetensor files, and how to convert Stable Diffusion model weights stored in other formats to .safetensor. Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. 
The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/cd77e06d1dc767d97f1e30ffdb11ce07.txt b/scrapped_outputs/cd77e06d1dc767d97f1e30ffdb11ce07.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/cd77e06d1dc767d97f1e30ffdb11ce07.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates a 64x64 px image based on a text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card automatically accepts it for the other IF models. Make sure to log in locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade Run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next, install diffusers and its dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more detailed examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default, diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM.
Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
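For example, a minimal sketch of that components reuse, assuming the stage_1 and stage_2 pipelines from the text-to-image example above are still in memory (the variable names img2img_stage_1 and img2img_stage_2 are only illustrative):

from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# Reuse the weights already held by the text-to-image pipelines
# instead of downloading and loading them a second time with from_pretrained().
img2img_stage_1 = IFImg2ImgPipeline(**stage_1.components)
img2img_stage_2 = IFImg2ImgSuperResolutionPipeline(**stage_2.components)

The same pattern works for the inpainting pipelines; the "Converting between different pipelines" section further below shows all of these conversions. The full image-to-image example, which loads the weights from scratch, follows.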
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, pipelines can also be loaded directly from each other. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for a smaller number of timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import DiffusionPipeline, IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the text encoder and pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, cols=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps.
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
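The strength and noise_level arguments documented above are not exercised by the end-to-end example. The sketch below is only an illustration: it reuses super_res_1_pipe, image, original_image, mask_image and the prompt embeddings from the example above, and the values 0.6 and 100 are arbitrary choices rather than recommended defaults. Copied
>>> # Lower `strength` to stay closer to the stage-I output (less noise is added),
>>> # and use `noise_level` to add a little noise to the upscaled image before denoising.
>>> image = super_res_1_pipe(
...     image=image,
...     mask_image=mask_image,
...     original_image=original_image,
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_embeds,
...     strength=0.6,
...     noise_level=100,
... ).images
>>> image[0].save("./if_stage_II_strength_0_6.png")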
diff --git a/scrapped_outputs/cd79740f879eb9cbaa961354558a80cc.txt b/scrapped_outputs/cd79740f879eb9cbaa961354558a80cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/cd79740f879eb9cbaa961354558a80cc.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. 
The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
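As a usage illustration (not part of the API reference above), this inverse scheduler is typically paired with DPMSolverMultistepScheduler in a DiffEdit-style workflow: the forward scheduler drives sampling while the inverse scheduler drives the latent inversion. The sketch below assumes the StableDiffusionDiffEditPipeline; the checkpoint, image URL and prompts are placeholder choices. Copied
import torch
from diffusers import (
    DPMSolverMultistepInverseScheduler,
    DPMSolverMultistepScheduler,
    StableDiffusionDiffEditPipeline,
)
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None
)
# Forward scheduler for sampling, inverse scheduler for inverting the image into latents.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

init_image = load_image("https://example.com/fruit-bowl.png").resize((768, 768))  # placeholder URL
source_prompt = "a bowl of fruits"
target_prompt = "a basket of fruits"

# Invert with the inverse scheduler, then denoise towards the target prompt inside the mask.
mask_image = pipe.generate_mask(image=init_image, source_prompt=source_prompt, target_prompt=target_prompt)
inv_latents = pipe.invert(prompt=source_prompt, image=init_image).latents
edited_image = pipe(
    prompt=target_prompt,
    mask_image=mask_image,
    image_latents=inv_latents,
    negative_prompt=source_prompt,
).images[0]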
diff --git a/scrapped_outputs/cdb4cad702132fd63ac769ad7f7d1e5a.txt b/scrapped_outputs/cdb4cad702132fd63ac769ad7f7d1e5a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/cdb66cf368c85263116f77a6c3c72590.txt b/scrapped_outputs/cdb66cf368c85263116f77a6c3c72590.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/cdb66cf368c85263116f77a6c3c72590.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. 
Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
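As a follow-up to the generation example above, the memory helpers documented in this section can be combined with it. This is only a sketch: it reuses the checkpoint and prompt from the example, and the negative prompt follows the tip given earlier on this page. Copied
import torch
from scipy.io import wavfile
from diffusers import MusicLDMPipeline

pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)

# Keep peak GPU memory low: offload submodels to CPU between forward passes
# and decode the latents in slices instead of all at once.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

audio = pipe(
    prompt="Techno music with a strong, upbeat tempo and high melodic riffs",
    negative_prompt="low quality, average quality",
    num_inference_steps=200,
    audio_length_in_s=10.0,
).audios[0]

# Save the audio sample as a .wav file (16 kHz, matching the example above).
wavfile.write("techno.wav", rate=16000, data=audio)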
enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/cdb707833eaec09ae25c0aa776326a27.txt b/scrapped_outputs/cdb707833eaec09ae25c0aa776326a27.txt new file mode 100644 index 0000000000000000000000000000000000000000..7a9e212961820d51624ffc31cfcde392d033e6bb --- /dev/null +++ b/scrapped_outputs/cdb707833eaec09ae25c0aa776326a27.txt @@ -0,0 +1,336 @@ +Memory and speed + +We present some techniques and ideas to optimize 🤗 Diffusers inference for memory or speed. As a general rule, we recommend the use of xFormers for memory efficient attention, please see the recommended installation instructions. +We’ll discuss how the following settings impact performance and memory. + +Latency +Speedup +original +9.50s +x1 +cuDNN auto-tuner +9.37s +x1.01 +fp16 +3.61s +x2.63 +channels last +3.30s +x2.88 +traced UNet +3.21s +x2.96 +memory efficient attention +2.63s +x3.61 +obtained on NVIDIA TITAN RTX by generating a single image of size 512x512 from + the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM + steps. + + +Enable cuDNN auto-tuner + +NVIDIA cuDNN supports many algorithms to compute a convolution. Autotuner runs a short benchmark and selects the kernel with the best performance on a given hardware for a given input size. +Since we’re using convolutional networks (other types currently not supported), we can enable cuDNN autotuner before launching the inference by setting: + + + Copied +import torch + +torch.backends.cudnn.benchmark = True + +Use tf32 instead of fp32 (on Ampere and later CUDA devices) + +On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too. It can significantly speed up computations with typically negligible loss of numerical accuracy. You can read more about it here. All you need to do is to add this before your inference: + + + Copied +import torch + +torch.backends.cuda.matmul.allow_tf32 = True + +Half precision weights + +To save more GPU memory and get more speed, you can load and run the model weights directly in half precision. 
This involves loading the float16 version of the weights, which was saved to a branch named fp16, and telling PyTorch to use the float16 type when loading them: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] +It is strongly discouraged to make use of torch.autocast in any of the pipelines as it can lead to black images and is always slower than using pure + float16 precision. + + +Sliced attention for additional memory savings + +For further memory savings, you can use a sliced version of attention that performs the computation in steps instead of all at once. +Attention slicing is useful even if a batch size of just 1 is used - as long + as the model uses more than one attention head. If there is more than one + attention head, the QK^T attention matrix can be computed sequentially for + each head, which can save a significant amount of memory. + +To perform the attention computation sequentially over each head, you only need to invoke enable_attention_slicing() in your pipeline before inference, like here: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_attention_slicing() +image = pipe(prompt).images[0] +There's a small performance penalty of about 10% slower inference times, but this method allows you to use Stable Diffusion in as little as 3.2 GB of VRAM! + +Sliced VAE decode for larger batches + +To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE decode that decodes the batch latents one image at a time. +You likely want to couple this with enable_attention_slicing() or enable_xformers_memory_efficient_attention() to further minimize memory use. +To perform the VAE decode one image at a time, invoke enable_vae_slicing() in your pipeline before inference. For example: + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +images = pipe([prompt] * 32).images +You may see a small performance boost in VAE decode on multi-image batches. There should be no performance impact on single-image batches. + +Offloading to CPU with accelerate for memory savings + +For additional memory savings, you can offload the weights to CPU and only load them to GPU when performing the forward pass. +To perform CPU offloading, all you have to do is invoke enable_sequential_cpu_offload(): + + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] +This brings memory consumption down to less than 3GB. +It is also possible to chain it with attention slicing for minimal memory consumption (< 2GB).
+ + + Copied +import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + + torch_dtype=torch.float16, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +pipe.enable_attention_slicing(1) + +image = pipe(prompt).images[0] +Note: When using enable_sequential_cpu_offload(), it is important to not move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal. See this issue for more information. + +Using Channels Last memory format + +Channels last memory format is an alternative way of ordering NCHW tensors in memory preserving dimensions ordering. Channels last tensors ordered in such a way that channels become the densest dimension (aka storing images pixel-per-pixel). Since not all operators currently support channels last format it may result in a worst performance, so it’s better to try it and see if it works for your model. +For example, in order to set the UNet model in our pipeline to use channels last format, we can use the following: + + + Copied +print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works + +Tracing + +Tracing runs an example input tensor through your model, and captures the operations that are invoked as that input makes its way through the model’s layers so that an executable or ScriptFunction is returned that will be optimized using just-in-time compilation. +To trace our UNet model, we can use the following: + + + Copied +import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + +# load inputs +def generate_inputs(): + sample = torch.randn(2, 4, 64, 64).half().cuda() + timestep = torch.rand(1).half().cuda() * 999 + encoder_hidden_states = torch.randn(2, 77, 768).half().cuda() + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - 
start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") +Then we can replace the unet attribute of the pipeline with the traced model like the following + + + Copied +from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] + +Memory Efficient Attention + +Recent work on optimizing the bandwitdh in the attention block has generated huge speed ups and gains in GPU memory usage. The most recent being Flash Attention from @tridao: code, paper. +Here are the speedups we obtain on a few Nvidia GPUs when running the inference at 512x512 with a batch size of 1 (one prompt): +GPU +Base Attention FP16 +Memory Efficient Attention FP16 +NVIDIA Tesla T4 +3.5it/s +5.5it/s +NVIDIA 3060 RTX +4.6it/s +7.8it/s +NVIDIA A10G +8.88it/s +15.6it/s +NVIDIA RTX A6000 +11.7it/s +21.09it/s +NVIDIA TITAN RTX +12.51it/s +18.22it/s +A100-SXM4-40GB +18.6it/s +29.it/s +A100-SXM-80GB +18.7it/s +29.5it/s +To leverage it just make sure you have: +PyTorch > 1.12 +Cuda available +Installed the xformers library. + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() diff --git a/scrapped_outputs/cde2816f735f65c9158ae549fb74ffca.txt b/scrapped_outputs/cde2816f735f65c9158ae549fb74ffca.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/cdecef2d2c223cc871902376ab0004d6.txt b/scrapped_outputs/cdecef2d2c223cc871902376ab0004d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/cdecef2d2c223cc871902376ab0004d6.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 
💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. 
For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. 
Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/cdeefd45f06fb04c1e83f5165c92d040.txt b/scrapped_outputs/cdeefd45f06fb04c1e83f5165c92d040.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ce04e0edadec4a19e4eb62f2a10c8c0c.txt b/scrapped_outputs/ce04e0edadec4a19e4eb62f2a10c8c0c.txt new file mode 100644 index 0000000000000000000000000000000000000000..70f4789731822d35a8fe512dc617234f1f0b0f36 --- /dev/null +++ b/scrapped_outputs/ce04e0edadec4a19e4eb62f2a10c8c0c.txt @@ -0,0 +1,33 @@ +🧨 Diffusers’ Ethical Guidelines + + +Preamble + +Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. +Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. +The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. + +Scope + +The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. + +Ethical guidelines + +The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. +Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. +Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. +Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. +Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. +Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. 
+Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. + +Examples of implementations: Safety features and Mechanisms + +The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. +Community tab: it enables the community to discuss and better collaborate on a project. +Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. +Encouraging safety in deployment +Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. +Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. +Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/ce07e0815fbf840b7230553c877be637.txt b/scrapped_outputs/ce07e0815fbf840b7230553c877be637.txt new file mode 100644 index 0000000000000000000000000000000000000000..0ee0e400598d2a2833e04ab333507f2ff56ef276 --- /dev/null +++ b/scrapped_outputs/ce07e0815fbf840b7230553c877be637.txt @@ -0,0 +1,42 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
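In practice UNetMotionModel is rarely constructed by hand; it usually appears as the UNet of an AnimateDiff pipeline, where a 2D UNet is inflated with motion modules from a MotionAdapter. The sketch below only illustrates that relationship; the checkpoint names are illustrative choices, not requirements. Copied
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

# Load a motion adapter and pair it with a Stable Diffusion v1.5-style base model.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
)

# The pipeline's UNet is the motion-aware variant documented below.
print(type(pipe.unet).__name__)  # UNetMotionModel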
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None time_cond_proj_dim: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, num_frames, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/ce2216e13ab54eedfb608df28228e07d.txt b/scrapped_outputs/ce2216e13ab54eedfb608df28228e07d.txt new file mode 100644 index 0000000000000000000000000000000000000000..1867f773b4344fd37e77bce342b7730704ed1f48 --- /dev/null +++ b/scrapped_outputs/ce2216e13ab54eedfb608df28228e07d.txt @@ -0,0 +1,76 @@ +Load community pipelines and components Community pipelines Community pipelines are any DiffusionPipeline class that are different from the original implementation as specified in their paper (for example, the StableDiffusionControlNetPipeline corresponds to the Text-to-Image Generation with ControlNet Conditioning paper). 
They provide additional functionality or extend the original implementation of a pipeline. There are many cool community pipelines like Speech to Image or Composable Stable Diffusion, and you can find all the official community pipelines here. To load any community pipeline on the Hub, pass the repository id of the community pipeline to the custom_pipeline argument and the model repository where you’d like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline and the pipeline weights and components from google/ddpm-cifar10-32: 🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True +) Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community CLIP Guided Stable Diffusion pipeline, and you can pass the CLIP model components directly to it: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + use_safetensors=True, +) For more information about community pipelines, take a look at the Community pipelines guide for how to use them and if you’re interested in adding a community pipeline check out the How to contribute a community pipeline guide! Community components Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn’t already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized. This section shows how users should use community components to build a community pipeline. You’ll use the showlab/show-1-base pipeline checkpoint as an example. So, let’s start loading the components: Import and load the text encoder from Transformers: Copied from transformers import T5Tokenizer, T5EncoderModel + +pipe_id = "showlab/show-1-base" +tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer") +text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder") Load a scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +scheduler = DPMSolverMultistepScheduler.from_pretrained(pipe_id, subfolder="scheduler") Load an image processor: Copied from transformers import CLIPFeatureExtractor + +feature_extractor = CLIPFeatureExtractor.from_pretrained(pipe_id, subfolder="feature_extractor") In steps 4 and 5, the custom UNet and pipeline implementation must match the format shown in their files for this example to work. 
Now you’ll load a custom UNet, which in this example, has already been implemented in the showone_unet_3d_condition.py script for your convenience. You’ll notice the UNet3DConditionModel class name is changed to ShowOneUNet3DConditionModel because UNet3DConditionModel already exists in Diffusers. Any components needed for the ShowOneUNet3DConditionModel class should be placed in the showone_unet_3d_condition.py script. Once this is done, you can initialize the UNet: Copied from showone_unet_3d_condition import ShowOneUNet3DConditionModel + +unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet") Finally, you’ll load the custom pipeline code. For this example, it has already been created for you in the pipeline_t2v_base_pixel.py script. This script contains a custom TextToVideoIFPipeline class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the pipeline_t2v_base_pixel.py script. Once everything is in place, you can initialize the TextToVideoIFPipeline with the ShowOneUNet3DConditionModel: Copied from pipeline_t2v_base_pixel import TextToVideoIFPipeline +import torch + +pipeline = TextToVideoIFPipeline( + unet=unet, + text_encoder=text_encoder, + tokenizer=tokenizer, + scheduler=scheduler, + feature_extractor=feature_extractor +) +pipeline = pipeline.to(device="cuda") +pipeline.torch_dtype = torch.float16 Push the pipeline to the Hub to share with the community! Copied pipeline.push_to_hub("custom-t2v-pipeline") After the pipeline is successfully pushed, you need a couple of changes: Change the _class_name attribute in model_index.json to "pipeline_t2v_base_pixel" and "TextToVideoIFPipeline". Upload showone_unet_3d_condition.py to the unet directory. Upload pipeline_t2v_base_pixel.py to the pipeline base directory. To run inference, simply add the trust_remote_code argument while initializing the pipeline to handle all the “magic” behind the scenes. 
Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "/", trust_remote_code=True, torch_dtype=torch.float16 +).to("cuda") + +prompt = "hello" + +# Text embeds +prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt) + +# Keyframes generation (8x64x40, 2fps) +video_frames = pipeline( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + num_frames=8, + height=40, + width=64, + num_inference_steps=2, + guidance_scale=9.0, + output_type="pt" +).frames As an additional reference example, you can refer to the repository structure of stabilityai/japanese-stable-diffusion-xl, that makes use of the trust_remote_code feature: Copied +from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True +) +pipeline.to("cuda") + +# if using torch < 2.0 +# pipeline.enable_xformers_memory_efficient_attention() + +prompt = "柴犬、カラフルアート" + +image = pipeline(prompt=prompt).images[0] diff --git a/scrapped_outputs/ce78ea240ecf0bf4a49d8b58d5faa0a5.txt b/scrapped_outputs/ce78ea240ecf0bf4a49d8b58d5faa0a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ce854979b99ced763c8b92e7f59ed313.txt b/scrapped_outputs/ce854979b99ced763c8b92e7f59ed313.txt new file mode 100644 index 0000000000000000000000000000000000000000..816a6ec9c2fb9e36207317fc29707b1dd833518a --- /dev/null +++ b/scrapped_outputs/ce854979b99ced763c8b92e7f59ed313.txt @@ -0,0 +1,412 @@ +Text-to-Video Generation with AnimateDiff Overview AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning by Yuwei Guo, Ceyuan Yang, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai. The abstract of the paper is the following: With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL. Available Pipelines Pipeline Tasks Demo AnimateDiffPipeline Text-to-Video Generation with AnimateDiff AnimateDiffVideoToVideoPipeline Video-to-Video Generation with AnimateDiff Available checkpoints Motion Adapter checkpoints can be found under guoyww. 
These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5. Usage example AnimateDiffPipeline AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet. The following example demonstrates how to use a MotionAdapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + Here are some sample outputs: masterpiece, bestquality, sunset. + AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the AnimateDiff checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. AnimateDiffVideoToVideoPipeline AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities. 
Copied import imageio +import requests +import torch +from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif +from io import BytesIO +from PIL import Image + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +# helper function to load videos +def load_video(file_path: str): + images = [] + + if file_path.startswith(('http://', 'https://')): + # If the file_path is a URL + response = requests.get(file_path) + response.raise_for_status() + content = BytesIO(response.content) + vid = imageio.get_reader(content) + else: + # Assuming it's a local file path + vid = imageio.get_reader(file_path) + + for frame in vid: + pil_image = Image.fromarray(frame) + images.append(pil_image) + + return images + +video = load_video("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-vid2vid-input-1.gif") + +output = pipe( + video = video, + prompt="panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + guidance_scale=7.5, + num_inference_steps=25, + strength=0.5, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") Here are some sample outputs: Source Video Output Video raccoon playing a guitar + panda playing a guitar + closeup of margot robbie, fireworks in the background, high quality + closeup of tony stark, robert downey jr, fireworks + Using Motion LoRAs Motion LoRAs are a collection of LoRAs that work with the guoyww/animatediff-motion-adapter-v1-5-2 checkpoint. These LoRAs are responsible for adding specific types of motion to the animations. 
Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) +pipe.load_lora_weights( + "guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out" +) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + beta_schedule="linear", + timestep_spacing="linspace", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using Motion LoRAs with PEFT You can also leverage the PEFT backend to combine Motion LoRA’s and create more complex animations. First install PEFT with Copied pip install peft Then you can use the following code to combine Motion LoRAs. Copied import torch +from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter +from diffusers.utils import export_to_gif + +# Load the motion adapter +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16) +# load SD 1.5 based finetuned model +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-zoom-out", adapter_name="zoom-out", +) +pipe.load_lora_weights( + "diffusers/animatediff-motion-lora-pan-left", adapter_name="pan-left", +) +pipe.set_adapters(["zoom-out", "pan-left"], adapter_weights=[1.0, 1.0]) + +scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + clip_sample=False, + timestep_spacing="linspace", + beta_schedule="linear", + steps_offset=1, +) +pipe.scheduler = scheduler + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_model_cpu_offload() + +output = pipe( + prompt=( + "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, " + "orange sky, warm lighting, fishing boats, ocean waves seagulls, " + "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, " + "golden hour, coastal landscape, seaside scenery" + ), + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=25, + generator=torch.Generator("cpu").manual_seed(42), +) +frames = output.frames[0] +export_to_gif(frames, "animation.gif") + masterpiece, bestquality, sunset. + Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. 
FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +model_id = "SG161222/Realistic_Vision_V5.1_noVAE" +pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter, torch_dtype=torch.float16).to("cuda") +pipe.scheduler = DDIMScheduler.from_pretrained( + model_id, + subfolder="scheduler", + beta_schedule="linear", + clip_sample=False, + timestep_spacing="linspace", + steps_offset=1 +) + +# enable memory savings +pipe.enable_vae_slicing() +pipe.enable_vae_tiling() + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# run inference +output = pipe( + prompt="a panda playing a guitar, on a boat, in the ocean, high quality", + negative_prompt="bad quality, worse quality", + num_frames=16, + guidance_scale=7.5, + num_inference_steps=20, + generator=torch.Generator("cpu").manual_seed(666), +) + +# disable FreeInit +pipe.disable_free_init() + +frames = output.frames[0] +export_to_gif(frames, "animation.gif") FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to when use_fast_sampling=False but still better results than vanilla video generation models). Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AnimateDiffPipeline class diffusers.AnimateDiffPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler +>>> from diffusers.utils import export_to_gif + +>>> adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2") +>>> pipe = AnimateDiffPipeline.from_pretrained("frankjoshua/toonyou_beta6", motion_adapter=adapter) +>>> pipe.scheduler = DDIMScheduler(beta_schedule="linear", steps_offset=1, clip_sample=False) +>>> output = pipe(prompt="A corgi walking in the park") +>>> frames = output.frames[0] +>>> export_to_gif(frames, "animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Generator = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. 
order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. 
If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. AnimateDiffVideoToVideoPipeline class diffusers.AnimateDiffVideoToVideoPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel motion_adapter: MotionAdapter scheduler: Union feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for video-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( video: List = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: Optional = None guidance_scale: float = 7.5 strength: float = 0.8 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → AnimateDiffPipelineOutput or tuple Parameters video (List[PipelineImageInput]) — +The input video to condition the generation on. Must be a list of images/frames of the video. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. 
strength (float, optional, defaults to 0.8) — +Higher strength leads to more differences between original video and generated video. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a AnimateDiffPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +AnimateDiffPipelineOutput or tuple + +If return_dict is True, AnimateDiffPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
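As a quick illustration of encode_prompt (the same method exists on both AnimateDiff pipelines documented above), the sketch below computes the prompt embeddings once and feeds them back to the pipeline via prompt_embeds / negative_prompt_embeds. It assumes encode_prompt returns the positive and negative embeddings as a pair, as in the community-pipeline example earlier in this document, and reuses the checkpoints from the usage examples above. Copied
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# Encode the prompts once; the embeddings can be reused across several generations.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="a panda surfing a wave, high quality",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="bad quality, worse quality",
)

# Pass the precomputed embeddings instead of raw text.
output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=7.5,
)
frames = output.frames[0]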
AnimateDiffPipelineOutput class diffusers.pipelines.animatediff.AnimateDiffPipelineOutput < source > ( frames: Union ) Parameters frames (List[List[PIL.Image.Image]] or torch.Tensor or np.ndarray) — +List of PIL Images of length batch_size or torch.Tensor or np.ndarray of shape +(batch_size, num_frames, height, width, num_channels). Output class for AnimateDiff pipelines. diff --git a/scrapped_outputs/ce898e3d025144441d3540704baa71ea.txt b/scrapped_outputs/ce898e3d025144441d3540704baa71ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..da3499e1844fd5dbd2117a0f51782369e62b740f --- /dev/null +++ b/scrapped_outputs/ce898e3d025144441d3540704baa71ea.txt @@ -0,0 +1,251 @@ +Schedulers + +Diffusers contains multiple pre-built schedule functions for the diffusion process. + +What is a scheduler? + +The schedule functions, denoted Schedulers in the library take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep to return a denoised sample. That’s why schedulers may also be called Samplers in other diffusion models implementations. +Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.adding noise in different manners represent the algorithmic processes to train a diffusion model by adding noise to images. +for inference, the scheduler defines how to update a sample based on an output from a pretrained model. +Schedulers are often defined by a noise schedule and an update rule to solve the differential equation solution. + +Discrete versus continuous schedulers + +All schedulers take in a timestep to predict the updated version of the sample being diffused. +The timesteps dictate where in the diffusion process the step is, where data is generated by iterating forward in time and inference is executed by propagating backwards through timesteps. +Different algorithms use timesteps that can be discrete (accepting int inputs), such as the DDPMScheduler or PNDMScheduler, or continuous (accepting float inputs), such as the score-based schedulers ScoreSdeVeScheduler or ScoreSdeVpScheduler. + +Designing Re-usable schedulers + +The core design principle between the schedule functions is to be model, system, and framework independent. +This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. +To this end, the design of schedulers is such that: +Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality. +Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists). +Many diffusion pipelines, such as StableDiffusionPipeline and DiTPipeline can use any of KarrasDiffusionSchedulers + +Schedulers Summary + +The following table summarizes all officially supported schedulers, their corresponding paper +Scheduler +Paper +ddim +Denoising Diffusion Implicit Models +ddim_inverse +Denoising Diffusion Implicit Models +ddpm +Denoising Diffusion Probabilistic Models +deis +DEISMultistepScheduler +singlestep_dpm_solver +Singlestep DPM-Solver +multistep_dpm_solver +Multistep DPM-Solver +heun +Heun scheduler inspired by Karras et. al paper +dpm_discrete +DPM Discrete Scheduler inspired by Karras et. al paper +dpm_discrete_ancestral +DPM Discrete Scheduler with ancestral sampling inspired by Karras et. 
al paper +stochastic_karras_ve +Variance exploding, stochastic sampling from Karras et. al +lms_discrete +Linear multistep scheduler for discrete beta schedules +pndm +Pseudo numerical methods for diffusion models (PNDM) +score_sde_ve +variance exploding stochastic differential equation (VE-SDE) scheduler +ipndm +improved pseudo numerical methods for diffusion models (iPNDM) +score_sde_vp +Variance preserving stochastic differential equation (VP-SDE) scheduler +euler +Euler scheduler +euler_ancestral +Euler Ancestral scheduler +vq_diffusion +VQDiffusionScheduler +unipc +UniPCMultistepScheduler +repaint +RePaint scheduler + +API + +The core API for any new scheduler must follow a limited structure. +Schedulers should provide one or more def step(...) functions that should be called to update the generated sample iteratively. +Schedulers should provide a set_timesteps(...) method that configures the parameters of a schedule function for a specific inference task. +Schedulers should be framework-specific. +The base class SchedulerMixin implements low level utilities used by multiple schedulers. + +SchedulerMixin + + +class diffusers.SchedulerMixin + +< +source +> +( +) + + + +Mixin containing common functions for the schedulers. +Class attributes: +_compatibles (List[str]) — A list of classes that are compatible with the parent class, so that +from_config can be used from a class different than the one used to save the config (should be overridden +by parent class). + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Dict[str, typing.Any] = None +subfolder: typing.Optional[str] = None +return_unused_kwargs = False +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a model repo on huggingface.co. Valid model ids should have an +organization name, like google/ddpm-celebahq-256. +A path to a directory containing the schedluer configurations saved using +save_pretrained(), e.g., ./my_model_directory/. + + + +subfolder (str, optional) — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). 
+ + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running transformers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + + +Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to +use this method in a firewalled environment. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +push_to_hub: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory where the configuration JSON file will be saved (will be created if it does not exist). + + + +Save a scheduler configuration object to the directory save_directory, so that it can be re-loaded using the +from_pretrained() class method. + +SchedulerOutput + + +The class `SchedulerOutput` contains the outputs from any schedulers `step(...)` call. + +class diffusers.schedulers.scheduling_utils.SchedulerOutput + +< +source +> +( +prev_sample: FloatTensor + +) + + +Parameters + +prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. + + + +Base class for the scheduler’s step function output. + +KarrasDiffusionSchedulers + +KarrasDiffusionSchedulers encompasses the main generalization of schedulers in Diffusers. The schedulers in this class are distinguished, at a high level, by their noise sampling strategy; the type of network and scaling; and finally the training strategy or how the loss is weighed. +The different schedulers, depending on the type of ODE solver, fall into the above taxonomy and provide a good abstraction for the design of the main schedulers implemented in Diffusers. The schedulers in this class are given below: + +class diffusers.schedulers.KarrasDiffusionSchedulers + +< +source +> +( +value +names = None +module = None +qualname = None +type = None +start = 1 + +) + + + +An enumeration. diff --git a/scrapped_outputs/cef95220d637d1d9f3811023dbc9233a.txt b/scrapped_outputs/cef95220d637d1d9f3811023dbc9233a.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/cef95220d637d1d9f3811023dbc9233a.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. 
You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. 
This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. 
Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. 
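For intuition, the depth-dependent scaling described above can be sketched in a few lines. This is purely illustrative and not the pipeline's internal code; the count of 12 down-block residuals plus one mid-block residual is an assumption for a standard Stable Diffusion UNet. Copied
import torch

# Illustrative sketch of guess mode's depth-dependent scaling (residual count assumed).
num_down_block_residuals = 12
scales = torch.logspace(-1, 0, num_down_block_residuals + 1)  # 0.1 ... 1.0, log-spaced

print(round(scales[0].item(), 2))   # 0.1 -> scale for the shallowest DownBlock residual
print(round(scales[-1].item(), 2))  # 1.0 -> scale for the MidBlock residual
# Each ControlNet residual would be multiplied by its scale before being added to the UNet.
In practice you only set guess_mode=True as in the example below; the scaling itself is applied inside the model.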
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/cf07b7cf73858cee03e8384313c75132.txt b/scrapped_outputs/cf07b7cf73858cee03e8384313c75132.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/cf07b7cf73858cee03e8384313c75132.txt @@ -0,0 
+1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/cf3c50379c35fe6dc1224b260077a450.txt b/scrapped_outputs/cf3c50379c35fe6dc1224b260077a450.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/cf3c50379c35fe6dc1224b260077a450.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. 
If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/cf865b83f864e866a27d1854de750839.txt b/scrapped_outputs/cf865b83f864e866a27d1854de750839.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ee871335093ed2ca29b91e756da3147dae8eda6 --- /dev/null +++ b/scrapped_outputs/cf865b83f864e866a27d1854de750839.txt @@ -0,0 +1,217 @@ +Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system. Everything you need for inference or training is accessible with the from_pretrained() method. This guide will show you how to load: pipelines from the Hub and locally different components into a pipeline checkpoint variants such as different floating point types or non-exponential mean averaged (EMA) weights models and schedulers Diffusion Pipeline 💡 Skip to the DiffusionPipeline explained section if you are interested in learning in more detail about how the DiffusionPipeline class works. The DiffusionPipeline class is the simplest and most generic way to load the latest trending diffusion model from the Hub. 
The DiffusionPipeline.from_pretrained() method automatically detects the correct pipeline class from the checkpoint, downloads, and caches all the required configuration and weight files, and returns a pipeline instance ready for inference. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to load it directly with its corresponding task-specific pipeline class: Copied from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the default components of any pipeline with another compatible component. Customization is important because: Changing the scheduler is important for exploring the trade-off between generation speed and quality. Different components of a model are typically trained independently and you can swap out a component with a better-performing one. During finetuning, usually only some components - like the UNet or text encoder - are trained. To find out which schedulers are compatible for customization, you can use the compatibles method: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline repository. 
Then you can pass the new EulerDiscreteScheduler instance to the scheduler argument in DiffusionPipeline: Copied from diffusers import DiffusionPipeline, EulerDiscreteScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to disable the safety checker for whatever reason, pass None to the safety_checker argument: Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) +""" +You have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 . +""" Reuse components across pipelines You can also reuse the same components in multiple pipelines to avoid loading the weights into RAM twice. Use the components method to save the components: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) + +components = stable_diffusion_txt2img.components Then you can pass the components to another pipeline without reloading the weights into RAM: Copied stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) You can also pass the components individually to the pipeline if you want more flexibility over which components to reuse or disable. For example, to reuse the same components in the text-to-image pipeline, except for the safety checker and feature extractor, in the image-to-image pipeline: Copied from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True) +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline( + vae=stable_diffusion_txt2img.vae, + text_encoder=stable_diffusion_txt2img.text_encoder, + tokenizer=stable_diffusion_txt2img.tokenizer, + unet=stable_diffusion_txt2img.unet, + scheduler=stable_diffusion_txt2img.scheduler, + safety_checker=None, + feature_extractor=None, + requires_safety_checker=False, +) Checkpoint variants A checkpoint variant is usually a checkpoint whose weights are: Stored in a different floating point type for lower precision and lower storage, such as torch.float16, because it only requires half the bandwidth and storage to download. You can’t use this variant if you’re continuing training or using a CPU. Non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference. You should use these to continue fine-tuning a model. 
💡 When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories instead of variations (for example, stable-diffusion-v1-4 and stable-diffusion-v1-5). Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like Safetensors), model structure, and weights that have identical tensor shapes. checkpoint type weight name argument for loading weights original diffusion_pytorch_model.bin floating point diffusion_pytorch_model.fp16.bin variant, torch_dtype non-EMA diffusion_pytorch_model.non_ema.bin variant There are two important arguments to know for loading variants: torch_dtype defines the floating point precision of the loaded checkpoints. For example, if you want to save bandwidth by loading a fp16 variant, you should specify torch_dtype=torch.float16 to convert the weights to fp16. Otherwise, the fp16 weights are converted to the default fp32 precision. You can also load the original checkpoint without defining the variant argument, and convert it to fp16 with torch_dtype=torch.float16. In this case, the default fp32 weights are downloaded first, and then they’re converted to fp16 after loading. variant defines which files should be loaded from the repository. For example, if you want to load a non_ema variant from the diffusers/stable-diffusion-variants repository, you should specify variant="non_ema" to download the non_ema files. Copied from diffusers import DiffusionPipeline +import torch + +# load fp16 variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) +# load non_ema variant +stable_diffusion = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True +) To save a checkpoint stored in a different floating-point type or as a non-EMA variant, use the DiffusionPipeline.save_pretrained() method and specify the variant argument. You should try and save a variant to the same folder as the original checkpoint, so you can load both from the same folder: Copied from diffusers import DiffusionPipeline + +# save as fp16 variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="fp16") +# save as non-ema variant +stable_diffusion.save_pretrained("runwayml/stable-diffusion-v1-5", variant="non_ema") If you don’t save the variant to an existing folder, you must specify the variant argument otherwise it’ll throw an Exception because it can’t find the original checkpoint: Copied # 👎 this won't work +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +# 👍 this works +stable_diffusion = DiffusionPipeline.from_pretrained( + "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True +) Models Models are loaded from the ModelMixin.from_pretrained() method, which downloads and caches the latest version of the model weights and configurations. If the latest files are available in the local cache, from_pretrained() reuses files in the cache instead of re-downloading them. Models can be loaded from a subfolder with the subfolder argument. 
For example, the model weights for runwayml/stable-diffusion-v1-5 are stored in the unet subfolder: Copied from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet", use_safetensors=True) Or directly from a repository’s directory: Copied from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) You can also load and save model variants by specifying the variant argument in ModelMixin.from_pretrained() and ModelMixin.save_pretrained(): Copied from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "runwayml/stable-diffusion-v1-5", subfolder="unet", variant="non_ema", use_safetensors=True +) +model.save_pretrained("./local-unet", variant="non_ema") Schedulers Schedulers are loaded from the SchedulerMixin.from_pretrained() method, and unlike models, schedulers are not parameterized or trained; they are defined by a configuration file. Loading schedulers does not consume any significant amount of memory and the same configuration file can be used for a variety of different schedulers. +For example, the following schedulers are compatible with StableDiffusionPipeline, which means you can load the same scheduler configuration file in any of these classes: Copied from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerAncestralDiscreteScheduler, + EulerDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler_anc`, `euler` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm, use_safetensors=True) DiffusionPipeline explained As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: Download the latest version of the folder structure required for inference and cache it. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() reuses the cache and won’t redownload the files. Load the cached weights into the correct pipeline class - retrieved from the model_index.json file - and return an instance of it. The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in runwayml/stable-diffusion-v1-5. Copied from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) +print(pipeline) You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components: "feature_extractor": a CLIPImageProcessor from 🤗 Transformers. "safety_checker": a component for screening against harmful content. 
"scheduler": an instance of PNDMScheduler. "text_encoder": a CLIPTextModel from 🤗 Transformers. "tokenizer": a CLIPTokenizer from 🤗 Transformers. "unet": an instance of UNet2DConditionModel. "vae": an instance of AutoencoderKL. Copied StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository: Copied . +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ ├── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +| ├── model.fp16.safetensors +│ ├── model.safetensors +│ |── pytorch_model.bin +| └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +| |── diffusion_pytorch_model.fp16.bin +│ |── diffusion_pytorch_model.f16.safetensors +│ |── diffusion_pytorch_model.non_ema.bin +│ |── diffusion_pytorch_model.non_ema.safetensors +│ └── diffusion_pytorch_model.safetensors +|── vae +. ├── config.json +. ├── diffusion_pytorch_model.bin + ├── diffusion_pytorch_model.fp16.bin + ├── diffusion_pytorch_model.fp16.safetensors + └── diffusion_pytorch_model.safetensors You can access each of the components of the pipeline as an attribute to view its configuration: Copied pipeline.tokenizer +CLIPTokenizer( + name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer", + vocab_size=49408, + model_max_length=77, + is_fast=False, + padding_side="right", + truncation_side="right", + special_tokens={ + "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True), + "pad_token": "<|endoftext|>", + }, + clean_up_tokenization_spaces=True +) Every pipeline expects a model_index.json file that tells the DiffusionPipeline: which pipeline class to load from _class_name which version of 🧨 Diffusers was used to create the model in _diffusers_version what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name) Copied { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPImageProcessor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + 
"transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} diff --git a/scrapped_outputs/cf976fd8f3d3f8b1e7c368935332b476.txt b/scrapped_outputs/cf976fd8f3d3f8b1e7c368935332b476.txt new file mode 100644 index 0000000000000000000000000000000000000000..70bc3fe7df1239b88e91535f57a06d94a2d20aa6 --- /dev/null +++ b/scrapped_outputs/cf976fd8f3d3f8b1e7c368935332b476.txt @@ -0,0 +1,298 @@ +InstructPix2Pix: Learning to Follow Image Editing Instructions + + +Overview + +InstructPix2Pix: Learning to Follow Image Editing Instructions by Tim Brooks, Aleksander Holynski and Alexei A. Efros. +The abstract of the paper is the following: +We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionInstructPix2PixPipeline +Text-Based Image Editing +🤗 Space + +Usage example + + + + Copied +import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + + +def download_image(url): + image = PIL.Image.open(requests.get(url, stream=True).raw) + image = PIL.ImageOps.exif_transpose(image) + image = image.convert("RGB") + return image + + +image = download_image(url) + +prompt = "make the mountains snowy" +images = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images +images[0].save("snowy_mountains.png") + +StableDiffusionInstructPix2PixPipeline + + +class diffusers.StableDiffusionInstructPix2PixPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +num_inference_steps: int = 100 +guidance_scale: float = 7.5 +image_guidance_scale: float = 1.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be repainted according to prompt. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. This pipeline requires a value of at least 1. + + +image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is to push the generated image towards the inital image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. 
When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/cfc1a0ccf338c485405f6fdb964ebd81.txt b/scrapped_outputs/cfc1a0ccf338c485405f6fdb964ebd81.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/cfc2d58f6e355e5df8594f1fe092be5e.txt b/scrapped_outputs/cfc2d58f6e355e5df8594f1fe092be5e.txt new file mode 100644 index 0000000000000000000000000000000000000000..db47242d5b00684f783685474911a8d89dd98131 --- /dev/null +++ b/scrapped_outputs/cfc2d58f6e355e5df8594f1fe092be5e.txt @@ -0,0 +1,250 @@ +Logging + +🧨 Diffusers has a centralized logging system, so that you can set up the verbosity of the library easily. +Currently the default verbosity of the library is WARNING. +To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity +to the INFO level. + + + Copied +import diffusers + +diffusers.logging.set_verbosity_info() +You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: + + + Copied +DIFFUSERS_VERBOSITY=error ./myprogram.py +Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using +logger.warning_advice. For example: + + + Copied +DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py +Here is an example of how to use the same logger as the library in your own module or script: + + + Copied +from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") +All the methods of this logging module are documented below, the main ones are +logging.get_verbosity() to get the current level of verbosity in the logger and +logging.set_verbosity() to set the verbosity to the level of your choice. In order (from the least +verbose to the most verbose), those levels (with their corresponding int values in parentheses) are: +diffusers.logging.CRITICAL or diffusers.logging.FATAL (int value, 50): only report the most +critical errors. +diffusers.logging.ERROR (int value, 40): only report errors. +diffusers.logging.WARNING or diffusers.logging.WARN (int value, 30): only reports errors and +warnings. This is the default level used by the library. +diffusers.logging.INFO (int value, 20): reports errors, warnings and basic information. +diffusers.logging.DEBUG (int value, 10): report all information. +By default, tqdm progress bars will be displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() can be used to suppress or unsuppress this behavior. + +Base setters + + +diffusers.utils.logging.set_verbosity_error + +< +source +> +( +) + + + +Set the verbosity to the ERROR level. + +diffusers.utils.logging.set_verbosity_warning + +< +source +> +( +) + + + +Set the verbosity to the WARNING level. + +diffusers.utils.logging.set_verbosity_info + +< +source +> +( +) + + + +Set the verbosity to the INFO level.
+ +diffusers.utils.logging.set_verbosity_debug + +< +source +> +( +) + + + +Set the verbosity to the DEBUG level. + +Other functions + + +diffusers.utils.logging.get_verbosity + +< +source +> +( +) +→ +int + +Returns + +int + + + +The logging level. + + +Return the current level for the 🤗 Diffusers’ root logger as an int. +🤗 Diffusers has following logging levels: +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + +diffusers.utils.logging.set_verbosity + +< +source +> +( +verbosity: int + +) + + +Parameters + +verbosity (int) — +Logging level, e.g., one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + + + + +Set the verbosity level for the 🤗 Diffusers’ root logger. + +diffusers.utils.get_logger + +< +source +> +( +name: typing.Optional[str] = None + +) + + + +Return a logger with the specified name. +This function is not supposed to be directly accessed unless you are writing a custom diffusers module. + +diffusers.utils.logging.enable_default_handler + +< +source +> +( +) + + + +Enable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.disable_default_handler + +< +source +> +( +) + + + +Disable the default handler of the HuggingFace Diffusers’ root logger. + +diffusers.utils.logging.enable_explicit_format + +< +source +> +( +) + + + + +Enable explicit formatting for every HuggingFace Diffusers’ logger. The explicit formatter is as follows: + + + Copied + [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. + + +diffusers.utils.logging.reset_format + +< +source +> +( +) + + + +Resets the formatting for HuggingFace Diffusers’ loggers. +All handlers currently bound to the root logger are affected by this method. + +diffusers.utils.logging.enable_progress_bar + +< +source +> +( +) + + + +Enable tqdm progress bar. + +diffusers.utils.logging.disable_progress_bar + +< +source +> +( +) + + + +Disable tqdm progress bar. diff --git a/scrapped_outputs/cfc4d0fd617eb4e7fd68ad0aa343cd09.txt b/scrapped_outputs/cfc4d0fd617eb4e7fd68ad0aa343cd09.txt new file mode 100644 index 0000000000000000000000000000000000000000..04f6b66657b62f8093d03c690225df93d111640b --- /dev/null +++ b/scrapped_outputs/cfc4d0fd617eb4e7fd68ad0aa343cd09.txt @@ -0,0 +1,52 @@ +UNet3DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 3D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. 
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet3DConditionModel class diffusers.UNet3DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'CrossAttnDownBlock3D', 'DownBlock3D') up_block_types: Tuple = ('UpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D', 'CrossAttnUpBlock3D') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1024 attention_head_dim: Union = 64 num_attention_heads: Union = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — The number of channels in the input sample. out_channels (int, optional, defaults to 4) — The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — The number of attention heads. A conditional 3D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. 
If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1 s2 b1 b2 ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet3DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. 
Returns +UNet3DConditionOutput or tuple + +If return_dict is True, an UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet3DConditionModel forward method. set_attention_slice < source > ( slice_size: Union ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/cffafe060dd0252917477e4b47c91b22.txt b/scrapped_outputs/cffafe060dd0252917477e4b47c91b22.txt new file mode 100644 index 0000000000000000000000000000000000000000..a001c5e9c77873189a313244b2e7bed2ac696984 --- /dev/null +++ b/scrapped_outputs/cffafe060dd0252917477e4b47c91b22.txt @@ -0,0 +1,101 @@ +Image variation The Stable Diffusion model can also generate variations from an input image. It uses a fine-tuned version of a Stable Diffusion model by Justin Pinkney from Lambda. The original codebase can be found at LambdaLabsML/lambda-diffusers and additional official checkpoints for image variation can be found at lambdalabs/sd-image-variations-diffusers. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionImageVariationPipeline class diffusers.StableDiffusionImageVariationPipeline < source > ( vae: AutoencoderKL image_encoder: CLIPVisionModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline to generate image variations from an input image using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied from diffusers import StableDiffusionImageVariationPipeline +from PIL import Image +from io import BytesIO +import requests + +pipe = StableDiffusionImageVariationPipeline.from_pretrained( + "lambdalabs/sd-image-variations-diffusers", revision="v2.0" +) +pipe = pipe.to("cuda") + +url = "https://lh3.googleusercontent.com/y-iFOHfLTwkuQSUegpwDdgKmOjRSTvPxat63dQLB25xkTs4lhIbRUFeNBWZzYf370g=s1200" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") + +out = pipe(image, num_images_per_prompt=3, guidance_scale=15) +out["images"][0].save("result.jpg") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/cffb01a80e1b815c0319ba8015f4597b.txt b/scrapped_outputs/cffb01a80e1b815c0319ba8015f4597b.txt new file mode 100644 index 0000000000000000000000000000000000000000..163deebba32d44239adf15467f9dcbdfbfad7c90 --- /dev/null +++ b/scrapped_outputs/cffb01a80e1b815c0319ba8015f4597b.txt @@ -0,0 +1,635 @@ +ControlNet with Stable Diffusion XL ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. 
The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. You can find additional smaller Stable Diffusion XL (SDXL) ControlNet checkpoints from the 🤗 Diffusers Hub organization, and browse community-trained checkpoints on the Hub. 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! If you don’t see a checkpoint you’re interested in, you can train your own SDXL ControlNet with our training script. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetPipeline class diffusers.StableDiffusionXLControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
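Because the controlnet argument described above accepts either a single ControlNetModel or a list of them, several ControlNets can be combined; in that case the conditioning images (and optionally the scales) are also passed as lists, as noted in the __call__ documentation below. The sketch that follows is only an illustration: the checkpoint names are reused from the examples in this section, and the conditioning image paths are placeholders.

Copied
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Two ControlNets; their outputs are added together during denoising.
controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0-small", variant="fp16", use_safetensors=True, torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# One conditioning image per ControlNet, and one scale per ControlNet.
canny_image = load_image("canny.png")  # placeholder paths
depth_image = load_image("depth.png")
image = pipe(
    "aerial view, a futuristic research complex in a bright foggy jungle",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[0.5, 0.5],
).images[0]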
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. 
original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. 
Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... ) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
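As a quick illustration of the memory-related switches documented above, the sketch below toggles VAE slicing and tiling on a pipeline built as in the canny example earlier in this section; the model ids are reused from that example.

Copied
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Trade a little speed for lower peak memory during VAE decoding.
pipe.enable_vae_slicing()  # decode the batch one slice at a time
pipe.enable_vae_tiling()   # decode large images tile by tile

# ... run the pipeline as in the canny example above ...

# Revert to single-step decoding when memory is not a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()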
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
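The listing above does not show encode_prompt's return values. In recent diffusers releases the SDXL pipelines return the four tensors (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds), and the sketch below assumes that order; treat the unpacking as an assumption and verify it against your installed version. pipe is assumed to be the pipeline built in the canny example, and the prompts are reused from it.

Copied
# Pre-compute the SDXL text embeddings once so they can be reused across calls.
# Assumes encode_prompt returns the four tensors in the order shown (check your version).
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="aerial view, a futuristic research complex in a bright foggy jungle, hard lighting",
    negative_prompt="low quality, bad quality, sketches",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
)

# Pass the embeddings instead of the raw strings (canny_image as in the example above).
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=canny_image,
).images[0]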
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLControlNetImg2ImgPipeline class diffusers.StableDiffusionXLControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets +as a list, the outputs from each ControlNet are added together to create one combined additional +conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image will be used as the starting point for the image generation process. Can also accept +image latents as image, if passing latents directly, it will not be encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If +the type is specified as Torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can +also be accepted as an image. The dimensions of the output image defaults to image’s dimensions. If +height and/or width are passed, image is resized according to them. If multiple ControlNets are +specified in init, images must be passed as a list such that each element of the list can be correctly +batched for input to a single controlnet. height (int, optional, defaults to the size of control_image) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to the size of control_image) — +The width in pixels of the generated image. 
Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from the prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from the negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters.
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the controlnet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set the +corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +In this mode, the ControlNet encoder tries its best to recognize the content of the input image even if +you remove all prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the controlnet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the controlnet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952.
For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple +containing the output images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # pip install accelerate transformers safetensors diffusers + +>>> import torch +>>> import numpy as np +>>> from PIL import Image + +>>> from transformers import DPTFeatureExtractor, DPTForDepthEstimation +>>> from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline, AutoencoderKL +>>> from diffusers.utils import load_image + + +>>> depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") +>>> feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-hybrid-midas") +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-depth-sdxl-1.0-small", +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16).to("cuda") +>>> pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... controlnet=controlnet, +... vae=vae, +... variant="fp16", +... use_safetensors=True, +... torch_dtype=torch.float16, +... ).to("cuda") +>>> pipe.enable_model_cpu_offload() + + +>>> def get_depth_map(image): +... image = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") +... with torch.no_grad(), torch.autocast("cuda"): +... depth_map = depth_estimator(image).predicted_depth + +... depth_map = torch.nn.functional.interpolate( +... depth_map.unsqueeze(1), +... size=(1024, 1024), +... mode="bicubic", +... align_corners=False, +... ) +... depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) +... depth_map = (depth_map - depth_min) / (depth_max - depth_min) +... 
image = torch.cat([depth_map] * 3, dim=1) +... image = image.permute(0, 2, 3, 1).cpu().numpy()[0] +... image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) +... return image + + +>>> prompt = "A robot, 4k photo" +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ).resize((1024, 1024)) +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> depth_image = get_depth_map(image) + +>>> images = pipe( +... prompt, +... image=image, +... control_image=depth_image, +... strength=0.99, +... num_inference_steps=50, +... controlnet_conditioning_scale=controlnet_conditioning_scale, +... ).images +>>> images[0].save(f"robot_cat.png") disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLControlNetInpaintPipeline class diffusers.StableDiffusionXLControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetModel scheduler: KarrasDiffusionSchedulers requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
Pipeline for image inpainting using Stable Diffusion XL with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask the image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all masked areas, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. 
This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> # !pip install transformers accelerate opencv-python +>>> from diffusers import StableDiffusionXLControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((1024, 1024)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((1024, 1024)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionXLControlNetInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. 
This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/cffba7f795a66f33bf9fefa0a105bae1.txt b/scrapped_outputs/cffba7f795a66f33bf9fefa0a105bae1.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/cffba7f795a66f33bf9fefa0a105bae1.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
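To make the effect described above concrete, the sketch below renders the same prompt and seed once with self-attention guidance disabled (sag_scale=0.0) and once at the default strength; the checkpoint name is only an example, and any Stable Diffusion checkpoint compatible with StableDiffusionSAGPipeline should work. With sag_scale=0.0 the self-attention guidance branch is skipped, so the first image is an ordinary classifier-free guidance sample.
>>> import torch
>>> from diffusers import StableDiffusionSAGPipeline

>>> pipe = StableDiffusionSAGPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")

>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> # same seed for both runs so only the SAG strength differs
>>> baseline = pipe(prompt, sag_scale=0.0, generator=torch.Generator("cuda").manual_seed(0)).images[0]
>>> guided = pipe(prompt, sag_scale=0.75, generator=torch.Generator("cuda").manual_seed(0)).images[0]
>>> baseline.save("astronaut_no_sag.png")
>>> guided.save("astronaut_sag.png")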
StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
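For example, a minimal sketch (the model name and batch size are illustrative) that combines sliced VAE decoding with a multi-image batch to keep peak memory down:
>>> import torch
>>> from diffusers import StableDiffusionSAGPipeline

>>> pipe = StableDiffusionSAGPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... ).to("cuda")
>>> pipe.enable_vae_slicing()  # decode the batch one image at a time to reduce peak memory

>>> images = pipe(
...     "a photo of an astronaut riding a horse on mars",
...     num_images_per_prompt=4,
...     sag_scale=0.75,
... ).images
>>> pipe.disable_vae_slicing()  # restore single-step decoding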
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/d000f21c49ad3f00ccf319ce435d2582.txt b/scrapped_outputs/d000f21c49ad3f00ccf319ce435d2582.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d00e448bb38c3fec15666ac52eb8ff7e.txt b/scrapped_outputs/d00e448bb38c3fec15666ac52eb8ff7e.txt new file mode 100644 index 0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/d00e448bb38c3fec15666ac52eb8ff7e.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. 
The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/d072660be214f0b9d9eca60a13c8f161.txt b/scrapped_outputs/d072660be214f0b9d9eca60a13c8f161.txt new file mode 100644 index 0000000000000000000000000000000000000000..f6cf8102ab9f969d42a944073994e9768b9895a9 --- /dev/null +++ b/scrapped_outputs/d072660be214f0b9d9eca60a13c8f161.txt @@ -0,0 +1,2294 @@ +Models + +Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models. 
The primary function of these models is to denoise an input sample by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$. +The models are built on the base class ModelMixin, which is a torch.nn.Module with basic functionality for saving and loading models both locally and from the HuggingFace hub. + +ModelMixin + + +class diffusers.ModelMixin + +< +source +> +( +) + + + +Base class for all models. +ModelMixin takes care of storing the configuration of the models and handles methods for loading, downloading +and saving models. +config_name (str) — A filename under which the model should be stored when calling +save_pretrained(). + +disable_gradient_checkpointing + +< +source +> +( +) + + + +Deactivates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_gradient_checkpointing + +< +source +> +( +) + + + +Activates gradient checkpointing for the current model. +Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint +activations”. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
+ + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + + +Instantiate a pretrained pytorch model from a pre-trained model configuration. +The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train +the model, you should first set it back in training mode with model.train(). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. 
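As an illustration of the options above, a minimal sketch (the repository id and subfolder are examples, not requirements) that loads a single sub-model from a pipeline repository in half precision and round-trips it through save_pretrained():
>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> # load only the `unet` subfolder of a pipeline repository in fp16
>>> unet = UNet2DConditionModel.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
... )
>>> unet.save_pretrained("./my_unet")  # writes the config and weight files to a local directory
>>> unet = UNet2DConditionModel.from_pretrained("./my_unet", torch_dtype=torch.float16)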
+It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Activate the special “offline-mode” to use +this method in a firewalled environment. + +num_parameters + +< +source +> +( +only_trainable: bool = False +exclude_embeddings: bool = False + +) +→ +int + +Parameters + +only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters + + +exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embeddings parameters + + +Returns + +int + + + +The number of parameters. + + +Get number of (optionally, trainable or non-embeddings) parameters in the module. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +save_function: typing.Callable = None +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.14.0/en/api/models#diffusers.ModelMixin.from_pretrained) class method. + +UNet2DOutput + + +class diffusers.models.unet_2d.UNet2DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states output. Output of last layer of model. + + + + +UNet2DModel + + +class diffusers.UNet2DModel + +< +source +> +( +sample_size: typing.Union[int, typing.Tuple[int, int], NoneType] = None +in_channels: int = 3 +out_channels: int = 3 +center_input_sample: bool = False +time_embedding_type: str = 'positional' +freq_shift: int = 0 +flip_sin_to_cos: bool = True +down_block_types: typing.Tuple[str] = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') +block_out_channels: typing.Tuple[int] = (224, 448, 672, 896) +layers_per_block: int = 2 +mid_block_scale_factor: float = 1 +downsample_padding: int = 1 +act_fn: str = 'silu' +attention_head_dim: typing.Optional[int] = 8 +norm_num_groups: int = 32 +norm_eps: float = 1e-05 +resnet_time_scale_shift: str = 'default' +add_attention: bool = True +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. 
+ + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. + + +freq_shift (int, optional, defaults to 0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:True): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")): Tuple of downsample block +types. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +The mid block type. Choose from UNetMidBlock2D or UnCLIPUNetMidBlock2D. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(224, 448, 672, 896)): Tuple of block output channels. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. + + +mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. + + +downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +attention_head_dim (int, optional, defaults to 8) — The attention head dimension. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups for the normalization. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon for the normalization. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — The type of class embedding to use which is ultimately +summed with the time embeddings. Choose from None, "timestep", or "identity". + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + + +UNet2DModel is a 2D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +class_labels: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. + + +Returns + +UNet2DOutput or tuple + + + +UNet2DOutput if return_dict is True, +otherwise a tuple. When returning a tuple, the first element is the sample tensor. 
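To make the input and output shapes concrete, here is a small sketch with a randomly initialized model; in practice the weights would come from from_pretrained(), and the 64x64 sample size is only an example.
>>> import torch
>>> from diffusers import UNet2DModel

>>> model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)  # random weights, for shape-checking only

>>> noisy_sample = torch.randn(1, 3, 64, 64)  # (batch, channel, height, width)
>>> timestep = torch.tensor([10])
>>> with torch.no_grad():
...     output = model(noisy_sample, timestep)
>>> output.sample.shape  # UNet2DOutput.sample has the same spatial shape as the input
torch.Size([1, 3, 64, 64])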
+ + + +UNet1DOutput + + +class diffusers.models.unet_1d.UNet1DOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +Hidden states output. Output of last layer of model. + + + + +UNet1DModel + + +class diffusers.UNet1DModel + +< +source +> +( +sample_size: int = 65536 +sample_rate: typing.Optional[int] = None +in_channels: int = 2 +out_channels: int = 2 +extra_in_channels: int = 0 +time_embedding_type: str = 'fourier' +flip_sin_to_cos: bool = True +use_timestep_embedding: bool = False +freq_shift: float = 0.0 +down_block_types: typing.Tuple[str] = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') +up_block_types: typing.Tuple[str] = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') +mid_block_type: typing.Tuple[str] = 'UNetMidBlock1D' +out_block_type: str = None +block_out_channels: typing.Tuple[int] = (32, 32, 64) +act_fn: str = None +norm_num_groups: int = 8 +layers_per_block: int = 1 +downsample_each_block: bool = False + +) + + +Parameters + +sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. + + +in_channels (int, optional, defaults to 2) — Number of channels in the input sample. + + +out_channels (int, optional, defaults to 2) — Number of channels in the output. + + +time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. + + +freq_shift (float, optional, defaults to 0.0) — Frequency shift for fourier time embedding. + + +flip_sin_to_cos (bool, optional, defaults to — +obj:False): Whether to flip sin to cos for fourier time embedding. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(32, 32, 64)): Tuple of block output channels. + + +mid_block_type (str, optional, defaults to “UNetMidBlock1D”) — block type for middle of UNet. + + +out_block_type (str, optional, defaults to None) — optional output processing of UNet. + + +act_fn (str, optional, defaults to None) — optional activitation function in UNet blocks. + + +norm_num_groups (int, optional, defaults to 8) — group norm member count in UNet blocks. + + +layers_per_block (int, optional, defaults to 1) — added number of layers in a UNet block. + + +downsample_each_block (int, optional, defaults to False — +experimental feature for using a UNet without upsampling. + + + +UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +return_dict: bool = True + +) +→ +UNet1DOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch_size, sample_size, num_channels) noisy inputs tensor + + +timestep (torch.FloatTensor or float or `int) — (batch) timesteps + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. + + +Returns + +UNet1DOutput or tuple + + + +UNet1DOutput if return_dict is True, +otherwise a tuple. 
When returning a tuple, the first element is the sample tensor. + + + +UNet2DConditionOutput + + +class diffusers.models.unet_2d_condition.UNet2DConditionOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +UNet2DConditionModel + + +class diffusers.UNet2DConditionModel + +< +source +> +( +sample_size: typing.Optional[int] = None +in_channels: int = 4 +out_channels: int = 4 +center_input_sample: bool = False +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +dual_cross_attention: bool = False +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +time_embedding_type: str = 'positional' +timestep_post_act: typing.Optional[str] = None +time_cond_proj_dim: typing.Optional[int] = None +conv_in_kernel: int = 3 +conv_out_kernel: int = 3 +projection_class_embeddings_input_dim: typing.Optional[int] = None + +) + + +Parameters + +sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. + + +in_channels (int, optional, defaults to 4) — The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — The number of channels in the output. + + +center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. + + +flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. + + +mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +The mid block type. Choose from UNetMidBlock2DCrossAttn or UNetMidBlock2DSimpleCrossAttn, will skip the +mid block layer if None. + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. + + +only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — The number of layers per block. 
+ + +downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. + + +mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, it will skip the normalization and activation layers in post-processing. + + +norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. + + +cross_attention_dim (int, optional, defaults to 1280) — The dimension of the cross attention features. + + +attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. + + +resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for resnet blocks, see ResnetBlock2D. Choose from default or scale_shift. + + +class_embed_type (str, optional, defaults to None) — The type of class embedding to use which is ultimately +summed with the time embeddings. Choose from None, "timestep", "identity", or "projection". + + +num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. + + +time_embedding_type (str, optional, defaults to "positional") — +The type of position embedding to use for timesteps. Choose from positional or fourier. + + +timestep_post_act (str, optional, defaults to None) — The second activation function to use in the timestep embedding. Choose from silu, mish and gelu. + + +time_cond_proj_dim (int, optional, defaults to None) — +The dimension of the cond_proj layer in the timestep embedding. + + +conv_in_kernel (int, optional, defaults to 3) — The kernel size of the conv_in layer. + + +conv_out_kernel (int, optional, defaults to 3) — The kernel size of the conv_out layer. + + +projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +using the “projection” class_embed_type. Required when using the “projection” class_embed_type. + + + +UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep +and returns a sample-shaped output. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[torch.Tensor, float, int] +encoder_hidden_states: Tensor +class_labels: typing.Optional[torch.Tensor] = None +timestep_cond: typing.Optional[torch.Tensor] = None +attention_mask: typing.Optional[torch.Tensor] = None +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +down_block_additional_residuals: typing.Optional[typing.Tuple[torch.Tensor]] = None +mid_block_additional_residual: typing.Optional[torch.Tensor] = None +return_dict: bool = True + +) +→ +UNet2DConditionOutput or tuple + +Parameters + +sample (torch.FloatTensor) — (batch, channel, height, width) noisy inputs tensor + + +timestep (torch.FloatTensor or float or int) — (batch) timesteps + + +encoder_hidden_states (torch.FloatTensor) — (batch, sequence_length, feature_dim) encoder hidden states + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple.
+ + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +UNet2DConditionOutput or tuple + + + +UNet2DConditionOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor, diffusers.models.cross_attention.LoRACrossAttnProcessor, diffusers.models.cross_attention.LoRAXFormersCrossAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor, diffusers.models.cross_attention.LoRACrossAttnProcessor, diffusers.models.cross_attention.LoRAXFormersCrossAttnProcessor]]] + +) + + +Parameters + +processor (dict of AttnProcessor or AttnProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all CrossAttention layers. In case processor is a dict, the key needs to define the path to the +corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. + + + + +DecoderOutput + + +class diffusers.models.vae.DecoderOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + + +Output of decoding method. + +VQEncoderOutput + + +class diffusers.models.vq_model.VQEncoderOutput + +< +source +> +( +latents: FloatTensor + +) + + +Parameters + +latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +Encoded output sample of the model. Output of the last layer of the model. + + + +Output of VQModel encoding method.
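Example: for the UNet2DConditionModel forward pass and the set_attention_slice method described above, a minimal sketch. The checkpoint id, subfolder and tensor shapes are assumptions chosen to match Stable Diffusion v1.5 style latents.

 Copied
>>> import torch
>>> from diffusers import UNet2DConditionModel

>>> # repo id and subfolder are assumptions for this sketch
>>> unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

>>> # trade a little speed for lower peak memory during attention
>>> unet.set_attention_slice("auto")

>>> # dummy latents and text-encoder hidden states
>>> sample = torch.randn(1, unet.config.in_channels, 64, 64)
>>> encoder_hidden_states = torch.randn(1, 77, unet.config.cross_attention_dim)

>>> with torch.no_grad():
...     noise_pred = unet(sample, timestep=10, encoder_hidden_states=encoder_hidden_states).sample
>>> noise_pred.shape
torch.Size([1, 4, 64, 64])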
+ +VQModel + + +class diffusers.VQModel + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 3 +sample_size: int = 32 +num_vq_embeddings: int = 256 +norm_num_groups: int = 32 +vq_embed_dim: typing.Optional[int] = None +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. + + +vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +VQ-VAE model from the paper Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray +Kavukcuoglu. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +forward + +< +source +> +( +sample: FloatTensor +return_dict: bool = True + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + + +AutoencoderKLOutput + + +class diffusers.models.autoencoder_kl.AutoencoderKLOutput + +< +source +> +( +latent_dist: DiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. 
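Example: an illustrative sketch of the VQModel encode and decode methods whose outputs (VQEncoderOutput and DecoderOutput) are documented above. The tiny, randomly initialized model and the shapes are arbitrary choices for the sketch.

 Copied
>>> import torch
>>> from diffusers import VQModel

>>> # a small, untrained VQ-VAE used only to show the encode/decode round trip
>>> model = VQModel(sample_size=32)

>>> images = torch.randn(1, 3, 32, 32)
>>> with torch.no_grad():
...     latents = model.encode(images).latents          # VQEncoderOutput.latents
...     reconstruction = model.decode(latents).sample   # DecoderOutput.sample
>>> reconstruction.shape
torch.Size([1, 3, 32, 32])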
+ +AutoencoderKL + + +class diffusers.AutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — Number of channels in the input image. + + +out_channels (int, optional, defaults to 3) — Number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to — +obj:("DownEncoderBlock2D",)): Tuple of downsample block types. + + +up_block_types (Tuple[str], optional, defaults to — +obj:("UpDecoderBlock2D",)): Tuple of upsample block types. + + +block_out_channels (Tuple[int], optional, defaults to — +obj:(64,)): Tuple of block output channels. + + +act_fn (str, optional, defaults to "silu") — The activation function to use. + + +latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. + + +sample_size (int, optional, defaults to 32) — TODO + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. + + + +Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational Bayes by Diederik P. Kingma +and Max Welling. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the model (such as downloading or saving, etc.) + +disable_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_slicing was previously invoked, this method will go back to computing +decoding in one step. + +disable_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. + +enable_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. + +enable_tiling + +< +source +> +( +use_tiling: bool = True + +) + + + +Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful to save a large amount of memory and to allow +the processing of larger images. + +forward + +< +source +> +( +sample: FloatTensor +sample_posterior: bool = False +return_dict: bool = True +generator: typing.Optional[torch._C.Generator] = None + +) + + +Parameters + +sample (torch.FloatTensor) — Input sample. + + +sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. 
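Example: a minimal sketch of the AutoencoderKL memory-saving options (enable_slicing, enable_tiling) together with the scaling_factor convention described above. The checkpoint id, subfolder and image sizes are assumptions for the sketch.

 Copied
>>> import torch
>>> from diffusers import AutoencoderKL

>>> # repo id and subfolder are assumptions for this sketch
>>> vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

>>> # reduce peak memory for large batches or large images
>>> vae.enable_slicing()
>>> vae.enable_tiling()

>>> images = torch.randn(2, 3, 512, 512)
>>> with torch.no_grad():
...     posterior = vae.encode(images).latent_dist
...     latents = posterior.sample() * vae.config.scaling_factor      # z = z * scaling_factor
...     decoded = vae.decode(latents / vae.config.scaling_factor).sample
>>> decoded.shape
torch.Size([2, 3, 512, 512])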
+ + + + +tiled_decode + +< +source +> +( +z: FloatTensor +return_dict: bool = True + +) + + +Parameters + +z (torch.FloatTensor) — Input batch of latent vectors. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. + + + +Decode a batch of images using a tiled decoder. +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled decoding is +different from non-tiled decoding due to each tile using a different decoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +look of the output, but they should be much less noticeable. + +tiled_encode + +< +source +> +( +x: FloatTensor +return_dict: bool = True + +) + + +Parameters + +x (torch.FloatTensor) — Input batch of images. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a AutoencoderKLOutput instead of a plain tuple. + + + +Encode a batch of images using a tiled encoder. +When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding due to each tile using a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +look of the output, but they should be much less noticeable. + +Transformer2DModel + + +class diffusers.Transformer2DModel + +< +source +> +( +num_attention_heads: int = 16 +attention_head_dim: int = 88 +in_channels: typing.Optional[int] = None +out_channels: typing.Optional[int] = None +num_layers: int = 1 +dropout: float = 0.0 +norm_num_groups: int = 32 +cross_attention_dim: typing.Optional[int] = None +attention_bias: bool = False +sample_size: typing.Optional[int] = None +num_vector_embeds: typing.Optional[int] = None +patch_size: typing.Optional[int] = None +activation_fn: str = 'geglu' +num_embeds_ada_norm: typing.Optional[int] = None +use_linear_projection: bool = False +only_cross_attention: bool = False +upcast_attention: bool = False +norm_type: str = 'layer_norm' +norm_elementwise_affine: bool = True + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. + + +in_channels (int, optional) — +Pass if the input is continuous. The number of channels in the input and output. + + +num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + +cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. + + +sample_size (int, optional) — Pass if the input is discrete. The width of the latent images. +Note that this is fixed at training time as it is used for learning a number of position embeddings. See +ImagePositionalEmbeddings. + + +num_vector_embeds (int, optional) — +Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. +Includes the class for the masked latent pixel.
+ + +activation_fn (str, optional, defaults to "geglu") — Activation function to be used in feed-forward. + + +num_embeds_ada_norm ( int, optional) — Pass if at least one of the norm_layers is AdaLayerNorm. +The number of diffusion steps used during training. Note that this is fixed at training time as it is used +to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for +up to but not more than steps than num_embeds_ada_norm. + + +attention_bias (bool, optional) — +Configure if the TransformerBlocks’ attention should contain a bias parameter. + + + +Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual +embeddings) inputs. +When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. Then apply standard +transformer action. Finally, reshape to image. +When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional +embeddings applied, see ImagePositionalEmbeddings. Then apply standard transformer action. Finally, predict +classes of unnoised image. +Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised +image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. + +forward + +< +source +> +( +hidden_states +encoder_hidden_states = None +timestep = None +class_labels = None +cross_attention_kwargs = None +return_dict: bool = True + +) +→ +Transformer2DModelOutput or tuple + +Parameters + +hidden_states ( When discrete, torch.LongTensor of shape (batch size, num latent pixels). — +When continous, torch.FloatTensor of shape (batch size, channel, height, width)): Input +hidden_states + + +encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. + + +timestep ( torch.long, optional) — +Optional timestep to be applied as an embedding in AdaLayerNorm’s. Used to indicate denoising step. + + +class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels +conditioning. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple. + + +Returns + +Transformer2DModelOutput or tuple + + + +Transformer2DModelOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +Transformer2DModelOutput + + +class diffusers.models.transformer_2d.Transformer2DModelOutput + +< +source +> +( +sample: FloatTensor + +) + + +Parameters + +sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +Hidden states conditioned on encoder_hidden_states input. If discrete, returns probability distributions +for the unnoised latent pixels. 
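Example: an illustrative sketch of Transformer2DModel in the continuous-input mode described above, with cross-attention conditioning. The configuration values and tensor sizes are arbitrary choices for the sketch, not recommended settings.

 Copied
>>> import torch
>>> from diffusers import Transformer2DModel

>>> # a small, randomly initialized transformer over continuous (latent feature map) inputs
>>> model = Transformer2DModel(
...     num_attention_heads=8,
...     attention_head_dim=32,
...     in_channels=64,
...     num_layers=1,
...     cross_attention_dim=128,
... )

>>> hidden_states = torch.randn(1, 64, 16, 16)        # (batch, channel, height, width)
>>> encoder_hidden_states = torch.randn(1, 77, 128)   # conditioning sequence for cross-attention
>>> with torch.no_grad():
...     out = model(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
>>> out.shape
torch.Size([1, 64, 16, 16])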
+ + + + +PriorTransformer + + +class diffusers.PriorTransformer + +< +source +> +( +num_attention_heads: int = 32 +attention_head_dim: int = 64 +num_layers: int = 20 +embedding_dim: int = 768 +num_embeddings = 77 +additional_embeddings = 4 +dropout: float = 0.0 + +) + + +Parameters + +num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. + + +attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. + + +num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. + + +embedding_dim (int, optional, defaults to 768) — The dimension of the CLIP embeddings. Note that CLIP +image embeddings and text embeddings are both the same dimension. + + +num_embeddings (int, optional, defaults to 77) — The max number of clip embeddings allowed. I.e. the +length of the prompt after it has been tokenized. + + +additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. + + +dropout (float, optional, defaults to 0.0) — The dropout probability to use. + + + +The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the +transformer predicts the image embeddings through a denoising diffusion process. +This model inherits from ModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +For more details, see the original paper: https://arxiv.org/abs/2204.06125 + +forward + +< +source +> +( +hidden_states +timestep: typing.Union[torch.Tensor, float, int] +proj_embedding: FloatTensor +encoder_hidden_states: FloatTensor +attention_mask: typing.Optional[torch.BoolTensor] = None +return_dict: bool = True + +) +→ +PriorTransformerOutput or tuple + +Parameters + +hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +x_t, the currently predicted image embeddings. + + +timestep (torch.long) — +Current denoising step. + + +proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. + + +encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. + + +attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. + + +Returns + +PriorTransformerOutput or tuple + + + +PriorTransformerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + + +PriorTransformerOutput + + +class diffusers.models.prior_transformer.PriorTransformerOutput + +< +source +> +( +predicted_image_embedding: FloatTensor + +) + + +Parameters + +predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. 
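Example: a rough sketch of the PriorTransformer forward pass documented above, using a small randomly initialized prior. The configuration and tensor shapes are assumptions chosen only to show the expected inputs and the shape of the predicted image embedding.

 Copied
>>> import torch
>>> from diffusers import PriorTransformer

>>> # a small, untrained prior used only to illustrate input/output shapes
>>> prior = PriorTransformer(num_attention_heads=2, attention_head_dim=12, num_layers=2, embedding_dim=768)

>>> hidden_states = torch.randn(1, 768)               # x_t, the noisy image embedding
>>> proj_embedding = torch.randn(1, 768)              # e.g. a pooled text embedding
>>> encoder_hidden_states = torch.randn(1, 77, 768)   # per-token text hidden states
>>> attention_mask = torch.ones(1, 77, dtype=torch.bool)
>>> with torch.no_grad():
...     out = prior(
...         hidden_states,
...         timestep=10,
...         proj_embedding=proj_embedding,
...         encoder_hidden_states=encoder_hidden_states,
...         attention_mask=attention_mask,
...     ).predicted_image_embedding
>>> out.shape
torch.Size([1, 768])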
+ + + + +ControlNetOutput + + +class diffusers.models.controlnet.ControlNetOutput + +< +source +> +( +down_block_res_samples: typing.Tuple[torch.Tensor] +mid_block_res_sample: Tensor + +) + + + + +ControlNetModel + + +class diffusers.ControlNetModel + +< +source +> +( +in_channels: int = 4 +flip_sin_to_cos: bool = True +freq_shift: int = 0 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +downsample_padding: int = 1 +mid_block_scale_factor: float = 1 +act_fn: str = 'silu' +norm_num_groups: typing.Optional[int] = 32 +norm_eps: float = 1e-05 +cross_attention_dim: int = 1280 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +use_linear_projection: bool = False +class_embed_type: typing.Optional[str] = None +num_class_embeds: typing.Optional[int] = None +upcast_attention: bool = False +resnet_time_scale_shift: str = 'default' +projection_class_embeddings_input_dim: typing.Optional[int] = None +controlnet_conditioning_channel_order: str = 'rgb' +conditioning_embedding_out_channels: typing.Optional[typing.Tuple[int]] = (16, 32, 96, 256) + +) + + + + +set_attention_slice + +< +source +> +( +slice_size + +) + + +Parameters + +slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +set_attn_processor + +< +source +> +( +processor: typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor, diffusers.models.cross_attention.LoRACrossAttnProcessor, diffusers.models.cross_attention.LoRAXFormersCrossAttnProcessor, typing.Dict[str, typing.Union[diffusers.models.cross_attention.CrossAttnProcessor, diffusers.models.cross_attention.XFormersCrossAttnProcessor, diffusers.models.cross_attention.SlicedAttnProcessor, diffusers.models.cross_attention.CrossAttnAddedKVProcessor, diffusers.models.cross_attention.SlicedAttnAddedKVProcessor, diffusers.models.cross_attention.LoRACrossAttnProcessor, diffusers.models.cross_attention.LoRAXFormersCrossAttnProcessor]]] + +) + + +Parameters + +processor (dict of AttnProcessor or AttnProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +of all CrossAttention layers. In case processor is a dict, the key needs to define the path to the +corresponding cross attention processor. This is strongly recommended when setting trainable attention processors. + + + + +FlaxModelMixin + + +class diffusers.FlaxModelMixin + +< +source +> +( +) + + + +Base class for all Flax models.
+FlaxModelMixin takes care of storing the configuration of the models and handles methods for loading, +downloading and saving models. + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike] +dtype: dtype = +*model_args +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids are namespaced under a user or organization name, like +runwayml/stable-diffusion-v1-5. +A path to a directory containing model weights saved using save_pretrained(), +e.g., ./my_model_directory/. + + + +dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified all the computation will be performed with the given dtype. +Note that this only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see ~ModelMixin.to_fp16 and +~ModelMixin.to_bf16. + + +model_args (sequence of positional arguments, optional) — +All remaining positional arguments will be passed to the underlying model’s __init__ method. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., +output_attentions=True). Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, **kwargs will be directly passed to the +underlying model’s __init__ method (we assume all relevant updates to the configuration have +already been done) +If a configuration is not provided, kwargs will be first passed to the configuration class +initialization function (from_config()). Each key of kwargs that corresponds to +a configuration attribute will be used to override said attribute with the supplied kwargs +value. 
Remaining keys that do not correspond to any configuration attribute will be passed to the +underlying model’s __init__ function. + + + + +Instantiate a pretrained flax model from a pre-trained model configuration. +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +is_main_process: bool = True + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + + +Save a model and its configuration file to a directory, so that it can be re-loaded using the +[from_pretrained()](/docs/diffusers/v0.14.0/en/api/models#diffusers.FlaxModelMixin.from_pretrained) class method + +to_bf16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans, True for params +you want to cast, and should be False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. +This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... 
} +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) + +to_fp16 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with the same structure as the params tree. The leaves should be booleans: True for params +you want to cast, and False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. +This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) + +to_fp32 + +< +source +> +( +params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] +mask: typing.Any = None + +) + + +Parameters + +params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. + + +mask (Union[Dict, FrozenDict]) — +A PyTree with the same structure as the params tree. The leaves should be booleans: True for params +you want to cast, and False for those you want to skip. + + + +Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. + +Examples: + + + Copied +>>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32; to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_fp16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) + +FlaxUNet2DConditionOutput + + +class diffusers.models.unet_2d_condition_flax.FlaxUNet2DConditionOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Hidden states conditioned on encoder_hidden_states input. Output of last layer of model. + + + + +replace + +< +source +> +( +**updates + +) + + + +Returns a new object replacing the specified fields with new values.
+ +FlaxUNet2DConditionModel + + +class diffusers.FlaxUNet2DConditionModel + +< +source +> +( +sample_size: int = 32 +in_channels: int = 4 +out_channels: int = 4 +down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') +up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') +only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False +block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) +layers_per_block: int = 2 +attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 +cross_attention_dim: int = 1280 +dropout: float = 0.0 +use_linear_projection: bool = False +dtype: dtype = +flip_sin_to_cos: bool = True +freq_shift: int = 0 +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +sample_size (int, optional) — +The size of the input sample. + + +in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. + + +out_channels (int, optional, defaults to 4) — +The number of channels in the output. + + +down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. The corresponding class names will be: “FlaxCrossAttnDownBlock2D”, +“FlaxCrossAttnDownBlock2D”, “FlaxCrossAttnDownBlock2D”, “FlaxDownBlock2D” + + +up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)) — +The tuple of upsample blocks to use. The corresponding class names will be: “FlaxUpBlock2D”, +“FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D”, “FlaxCrossAttnUpBlock2D” + + +block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. + + +layers_per_block (int, optional, defaults to 2) — +The number of layers per block. + + +attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. + + +cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. + + +dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. + + +flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. + + +freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. + + + +FlaxUNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a +timestep and returns sample shaped output. +This model inherits from FlaxModelMixin. Check the superclass documentation for the generic methods the library +implements for all the models (such as downloading or saving, etc.) +Also, this model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. 
+ +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization + +FlaxDecoderOutput + + +class diffusers.models.vae_flax.FlaxDecoderOutput + +< +source +> +( +sample: ndarray + +) + + +Parameters + +sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +Decoded output sample of the model. Output of the last layer of the model. + + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +Parameters dtype + + + +Output of decoding method. + +replace + +< +source +> +( +**updates + +) + + + +Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKLOutput + + +class diffusers.models.vae_flax.FlaxAutoencoderKLOutput + +< +source +> +( +latent_dist: FlaxDiagonalGaussianDistribution + +) + + +Parameters + +latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. + + + +Output of AutoencoderKL encoding method. + +replace + +< +source +> +( +**updates + +) + + + +Returns a new object replacing the specified fields with new values. + +FlaxAutoencoderKL + + +class diffusers.FlaxAutoencoderKL + +< +source +> +( +in_channels: int = 3 +out_channels: int = 3 +down_block_types: typing.Tuple[str] = ('DownEncoderBlock2D',) +up_block_types: typing.Tuple[str] = ('UpDecoderBlock2D',) +block_out_channels: typing.Tuple[int] = (64,) +layers_per_block: int = 1 +act_fn: str = 'silu' +latent_channels: int = 4 +norm_num_groups: int = 32 +sample_size: int = 32 +scaling_factor: float = 0.18215 +dtype: dtype = +parent: typing.Union[typing.Type[flax.linen.module.Module], typing.Type[flax.core.scope.Scope], typing.Type[flax.linen.module._Sentinel], NoneType] = +name: str = None + +) + + +Parameters + +in_channels (int, optional, defaults to 3) — +Input channels + + +out_channels (int, optional, defaults to 3) — +Output channels + + +down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +DownEncoder block type + + +up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +UpDecoder block type + + +block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple containing the number of output channels for each block + + +layers_per_block (int, optional, defaults to 1) — +Number of ResNet layers for each block + + +act_fn (str, optional, defaults to silu) — +Activation function + + +latent_channels (int, optional, defaults to 4) — +Latent space channels + + +norm_num_groups (int, optional, defaults to 32) — +Norm num group + + +sample_size (int, optional, defaults to 32) — +Sample input size + + +scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 +/ scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper.
+ + +dtype (jnp.dtype, optional, defaults to jnp.float32) — +parameters dtype + + + +Flax Implementation of Variational Autoencoder (VAE) model with KL loss from the paper Auto-Encoding Variational +Bayes by Diederik P. Kingma and Max Welling. +This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to +general usage and behavior. +Finally, this model supports inherent JAX features such as: +Just-In-Time (JIT) compilation +Automatic Differentiation +Vectorization +Parallelization diff --git a/scrapped_outputs/d07644ae022d382f18a0b7fd46be4d70.txt b/scrapped_outputs/d07644ae022d382f18a0b7fd46be4d70.txt new file mode 100644 index 0000000000000000000000000000000000000000..bdb12cb9f8ec935ec9417d06fc21a1176b44b6b4 --- /dev/null +++ b/scrapped_outputs/d07644ae022d382f18a0b7fd46be4d70.txt @@ -0,0 +1,248 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. 
Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. 
Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. 
For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, AnimateDiff. 
And you can use any custom models finetuned from the same base models. It also works with LCM-Lora out of the box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you can also load a CLIPVisionModelWithProjection model explicitly and pass it to the Stable Diffusion pipeline when you create it. + + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add “sunglasses”. 😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality, wearing sunglasses', + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] You can use the set_ip_adapter_scale() method to adjust the ratio between the text prompt and the image prompt conditioning. If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See the examples below for how to use it with Image-to-Image and Inpaint.
image-to-image inpaint Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality', +    image = image, +    ip_adapter_image=ip_image, +    num_inference_steps=50, +    generator=generator, +    strength=0.6, +).images +images[0] IP-Adapters can also be used with SDXL Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face model. Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image You can load multiple IP-Adapter models and use multiple reference images at the same time. In this example we use IP-Adapter-Plus face model to create a consistent character and also use IP-Adapter-Plus model along with 10 images to create a coherent style in the image we generate. 
Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() + +face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] + +generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0]     style input image face input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a feature request if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet! You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff. 
ControlNet AnimateDiff Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image diff --git a/scrapped_outputs/d083c005eb88ec815cf7e0d6d4f5ca8d.txt b/scrapped_outputs/d083c005eb88ec815cf7e0d6d4f5ca8d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/d083c005eb88ec815cf7e0d6d4f5ca8d.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin,Anastasia Maltseva,Igor Pavlov,Andrei Filatov,Arseniy Shakhmatov,Andrey Kuznetsov,Denis Dimitrov, Zein Shaheen The description from it’s Github page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, which is an encoder decoder model based on the T5 architecture. New U-Net architecture featuring BigGAN-deep blocks doubles depth while maintaining the same number of parameters. Sber-MoVQGAN is a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
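Before the API reference below, here is a minimal text-to-image sketch for Kandinsky 3. It mirrors the example further down this page (the kandinsky-community/kandinsky-3 checkpoint loaded through AutoPipelineForText2Image); treat the prompt and output file name as placeholders: Copied
from diffusers import AutoPipelineForText2Image
import torch

# Load the official Kandinsky 3 checkpoint in fp16 and offload submodules to the CPU
# between forward passes, since the FLAN-UL2 text encoder is large.
pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats."
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
image.save("kandinsky3_t2i.png")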
Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. 
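encode_prompt() is useful when you want to compute the text embeddings once and reuse them across several calls. The sketch below is only illustrative: it assumes the method returns the prompt embeddings, negative prompt embeddings, and their attention masks in that order, matching the prompt_embeds and attention_mask arguments of __call__ documented above; verify the return order against the pipeline source before relying on it. Copied
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Assumed return order: prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask.
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = pipe.encode_prompt(
    "An oil painting of a lighthouse at dawn",
    negative_prompt="low quality, blurry",
)

# Reuse the precomputed embeddings; the attention masks must be passed alongside them.
image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    attention_mask=attention_mask,
    negative_attention_mask=negative_attention_mask,
    num_inference_steps=25,
).images[0]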
Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." +>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. 
diff --git a/scrapped_outputs/d0a09d96e057fc18e2ebf4c71c8ef93b.txt b/scrapped_outputs/d0a09d96e057fc18e2ebf4c71c8ef93b.txt new file mode 100644 index 0000000000000000000000000000000000000000..607a91aa0d9c693ca270e96f0e69ff01fb89dfd4 --- /dev/null +++ b/scrapped_outputs/d0a09d96e057fc18e2ebf4c71c8ef93b.txt @@ -0,0 +1,142 @@ +How to build a community pipeline + +Note: this page was built from the GitHub Issue on Community Pipelines #841. +Let’s make an example! +Say you want to define a pipeline that just does a single forward pass through a U-Net and then calls a scheduler only once (note that this doesn’t make any sense from a scientific point of view, but it only serves as an example of how things work under the hood). +Cool! So you open your favorite IDE and start creating your pipeline 💻. +First, what model weights and configurations do we need? +We have a U-Net and a scheduler, so our pipeline should take a U-Net and a scheduler as arguments. +Also, as stated above, you’d like to be able to load the weights and the scheduler config from the Hub and share your code with others, so we’ll inherit from DiffusionPipeline: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() +Now, we must save the unet and scheduler in a config file so that you can save your pipeline with save_pretrained. +Therefore, make sure you add every component that is save-able to the register_modules function: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) +Cool, the init is done! 🔥 Now, let’s go into the forward pass, which we recommend defining as __call__. Here you’re given all the creative freedom there is. For our amazing “one-step” pipeline, we simply create a random image and call the unet once and the scheduler once: + + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + def __call__(self): + image = torch.randn( + (1, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size), + ) + timestep = 1 + + model_output = self.unet(image, timestep).sample + scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + + return scheduler_output +Cool, that’s it! 🚀 You can now run this pipeline by passing a unet and a scheduler to the init: + + + Copied +from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() +But what’s even better is that you can load pre-existing weights into the pipeline if they exactly match your pipeline structure. This is, e.g., the case for https://huggingface.co/google/ddpm-cifar10-32, so we can do the following: + + + Copied +pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32") + +output = pipeline() +We want to share this amazing pipeline with the community, so we would open a PR to add the following code under one_step_unet.py to https://github.com/huggingface/diffusers/tree/main/examples/community.
+ + + Copied +from diffusers import DiffusionPipeline +import torch + + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + def __call__(self): + image = torch.randn( + (1, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size), + ) + timestep = 1 + + model_output = self.unet(image, timestep).sample + scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + + return scheduler_output +Our amazing pipeline got merged here: #840. +Now everybody that has diffusers >= 0.4.0 installed can use our pipeline magically 🪄 as follows: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Another way to upload your custom_pipeline, besides sending a PR, is uploading the code that contains it to the Hugging Face Hub, as exemplified here. +Try it out now - it works! +In general, you will want to create much more sophisticated pipelines, so we recommend looking at existing pipelines here: https://github.com/huggingface/diffusers/tree/main/examples/community. +IMPORTANT: +You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline as this will be automatically detected. + +How do community pipelines work? + + +A community pipeline is a class that has to inherit from ['DiffusionPipeline']: +and that has been added to `examples/community` [files](https://github.com/huggingface/diffusers/tree/main/examples/community). +The community can load the pipeline code via the custom_pipeline argument from DiffusionPipeline. See docs [here](https://huggingface.co/docs/diffusers/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.custom_pipeline): +This means: +The model weights and configs of the pipeline should be loaded from the pretrained_model_name_or_path argument: +whereas the code that powers the community pipeline is defined in a file added in examples/community. +Now, it might very well be that only some of your pipeline components weights can be downloaded from an official repo. +The other components should then be passed directly to init as is the case for the ClIP guidance notebook here. +The magic behind all of this is that we load the code directly from GitHub. You can check it out in more detail if you follow the functionality defined here: + + + Copied +# 2. Load the pipeline class, if using custom module then load it from the hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) +This is why a community pipeline merged to GitHub will be directly available to all diffusers packages. 
diff --git a/scrapped_outputs/d0b4f0628b7e6ac2ad518ca291f68c75.txt b/scrapped_outputs/d0b4f0628b7e6ac2ad518ca291f68c75.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/d0b4f0628b7e6ac2ad518ca291f68c75.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. 
Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. 
Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... 
], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/d0c0ce4cf137ed9ba6fd11ea77bfc9e1.txt b/scrapped_outputs/d0c0ce4cf137ed9ba6fd11ea77bfc9e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e80c0c76f67b222f116cbc389bb925517c9da820 --- /dev/null +++ b/scrapped_outputs/d0c0ce4cf137ed9ba6fd11ea77bfc9e1.txt @@ -0,0 +1,139 @@ +UNet2DConditionModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet conditional model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
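Before the full reference, here is a minimal sketch showing how the conditional UNet is usually loaded on its own from a Stable Diffusion checkpoint; the repo id and "unet" subfolder follow the standard Stable Diffusion repository layout: Copied
from diffusers import UNet2DConditionModel
import torch

# Load only the UNet component from a Stable Diffusion checkpoint.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)
print(unet.config.sample_size, unet.config.cross_attention_dim)  # 64 and 768 for this checkpoint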
UNet2DConditionModel class diffusers.UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 center_input_sample: bool = False flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 dropout: float = 0.0 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 reverse_transformer_layers_per_block: Optional = None encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None attention_head_dim: Union = 8 num_attention_heads: Union = None dual_cross_attention: bool = False use_linear_projection: bool = False class_embed_type: Optional = None addition_embed_type: Optional = None addition_time_embed_dim: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' resnet_skip_time_act: bool = False resnet_out_scale_factor: float = 1.0 time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None attention_type: str = 'default' class_embeddings_concat: bool = False mid_block_only_cross_attention: Optional = None cross_attention_norm: Optional = None addition_embed_type_num_heads: int = 64 ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn, UNetMidBlock2D, or +UNetMidBlock2DSimpleCrossAttn. If None, the mid block layer is skipped. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention(bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. 
mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int, Tuple[int], or Tuple[Tuple] , optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). reverse_transformer_layers_per_block : (Tuple[Tuple], optional, defaults to None): +The number of transformer blocks of type BasicTransformerBlock, in the upsampling +blocks of the U-Net. Only relevant if transformer_layers_per_block is of type Tuple[Tuple] and for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. +encoder_hid_dim (int, optional, defaults to None): +If encoder_hid_dim_type is defined, encoder_hidden_states will be projected from encoder_hid_dim +dimension to cross_attention_dim. +encoder_hid_dim_type (str, optional, defaults to None): +If given, the encoder_hidden_states and potentially other embeddings are down-projected to text +embeddings of dimension cross_attention according to encoder_hid_dim_type. +attention_head_dim (int, optional, defaults to 8): The dimension of the attention heads. +num_attention_heads (int, optional): +The number of attention heads. If not defined, defaults to attention_head_dim +resnet_time_scale_shift (str, optional, defaults to "default"): Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. +class_embed_type (str, optional, defaults to None): +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". +addition_embed_type (str, optional, defaults to None): +Configures an optional embedding which will be summed with the time embeddings. Choose from None or +“text”. “text” will use the TextTimeEmbedding layer. +addition_time_embed_dim: (int, optional, defaults to None): +Dimension for the timestep embeddings. +num_class_embeds (int, optional, defaults to None): +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. +time_embedding_type (str, optional, defaults to positional): +The type of position embedding to use for timesteps. Choose from positional or fourier. +time_embedding_dim (int, optional, defaults to None): +An optional override for the dimension of the projected time embedding. 
+time_embedding_act_fn (str, optional, defaults to None): +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. +timestep_post_act (str, optional, defaults to None): +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. +time_cond_proj_dim (int, optional, defaults to None): +The dimension of cond_proj layer in the timestep embedding. +conv_in_kernel (int, optional, default to 3): The kernel size of conv_in layer. conv_out_kernel (int, +optional, default to 3): The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, +optional): The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". +class_embeddings_concat (bool, optional, defaults to False): Whether to concatenate the time +embeddings with the class embeddings. +mid_block_only_cross_attention (bool, optional, defaults to None): +Whether to use cross attention with the mid block when using the UNetMidBlock2DSimpleCrossAttn. If +only_cross_attention is given as a single boolean and mid_block_only_cross_attention is None, the +only_cross_attention value is used as the value for mid_block_only_cross_attention. Default to False +otherwise. disable_freeu < source > ( ) Disables the FreeU mechanism. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None down_intrablock_additional_residuals: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). class_labels (torch.Tensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containing additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. +added_cond_kwargs — (dict, optional): +A kwargs dictionary containin additional embeddings that if specified are added to the embeddings that +are passed along to the UNet blocks. down_block_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added to UNet long skip connections from down blocks to up blocks for +example from ControlNet side model(s) mid_block_additional_residual (torch.Tensor, optional) — +additional residual to be added to UNet mid block output, for example from ControlNet side model down_intrablock_additional_residuals (tuple of torch.Tensor, optional) — +additional residuals to be added within UNet down blocks, for example from T2I-Adapter side model(s) Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNet2DConditionModel forward method. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attention_slice < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int or list(int), optional, defaults to "auto") — +When "auto", input to the attention heads is halved, so attention is computed in two steps. If +"max", maximum amount of memory is saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in +several steps. 
This is useful for saving some memory in exchange for a small decrease in speed. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. unload_lora < source > ( ) Unloads LoRA weights. UNet2DConditionOutput class diffusers.models.unets.unet_2d_condition.UNet2DConditionOutput < source > ( sample: FloatTensor = None ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet2DConditionModel. FlaxUNet2DConditionModel class diffusers.FlaxUNet2DConditionModel < source > ( sample_size: int = 32 in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 attention_head_dim: Union = 8 num_attention_heads: Union = None cross_attention_dim: int = 1280 dropout: float = 0.0 use_linear_projection: bool = False dtype: dtype = flip_sin_to_cos: bool = True freq_shift: int = 0 use_memory_efficient_attention: bool = False split_head_dim: bool = False transformer_layers_per_block: Union = 1 addition_embed_type: Optional = None addition_time_embed_dim: Optional = None addition_embed_type_num_heads: int = 64 projection_class_embeddings_input_dim: Optional = None parent: Union = name: Optional = None ) Parameters sample_size (int, optional) — +The size of the input sample. in_channels (int, optional, defaults to 4) — +The number of channels in the input sample. out_channels (int, optional, defaults to 4) — +The number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxCrossAttnDownBlock2D", "FlaxDownBlock2D")) — +The tuple of downsample blocks to use. up_block_types (Tuple[str], optional, defaults to ("FlaxUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D", "FlaxCrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can be one of UNetMidBlock2DCrossAttn. If None, the mid block layer is skipped. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — +The number of layers per block. attention_head_dim (int or Tuple[int], optional, defaults to 8) — +The dimension of the attention heads. num_attention_heads (int or Tuple[int], optional) — +The number of attention heads. 
cross_attention_dim (int, optional, defaults to 768) — +The dimension of the cross attention features. dropout (float, optional, defaults to 0) — +Dropout probability for down, up and bottleneck blocks. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. use_memory_efficient_attention (bool, optional, defaults to False) — +Enable memory efficient attention as described here. split_head_dim (bool, optional, defaults to False) — +Whether to split the head dimension into a new axis for the self-attention computation. In most cases, +enabling this flag should speed up the computation for Stable Diffusion 2.x and Stable Diffusion XL. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is also a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matters related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxUNet2DConditionOutput class diffusers.models.unets.unet_2d_condition_flax.FlaxUNet2DConditionOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of FlaxUNet2DConditionModel. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/d0e3e2377e1ffc64e9f95dbd3d124c11.txt b/scrapped_outputs/d0e3e2377e1ffc64e9f95dbd3d124c11.txt new file mode 100644 index 0000000000000000000000000000000000000000..eddbddf33248951d6d77235788b1dfb5194bea8f --- /dev/null +++ b/scrapped_outputs/d0e3e2377e1ffc64e9f95dbd3d124c11.txt @@ -0,0 +1,474 @@ +Zero-Shot Text-to-Video Generation + + +Overview + +Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators by +Levon Khachatryan, +Andranik Movsisyan, +Vahram Tadevosyan, +Roberto Henschel, +Zhangyang Wang, Shant Navasardyan, Humphrey Shi. +Our method Text2Video-Zero enables zero-shot video generation using either +A textual prompt, or +A prompt combined with guidance from poses or edges, or +Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +Results are temporally consistent and follow closely the guidance and textual prompts. + +The abstract of the paper is the following: +Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. 
+Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. +Resources: +Project Page +Paper +Original Code + +Available Pipelines: + +Pipeline +Tasks +Demo +TextToVideoZeroPipeline +Zero-shot Text-to-Video Generation +🤗 Space + +Usage example + + +Text-To-Video + +To generate a video from prompt, run the following python command + + + Copied +import torch +import imageio +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) +You can change these parameters in the pipeline call: +Motion field strength (see the paper, Sect. 3.3.1):motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 +T and T' (see the paper, Sect. 3.3.1)t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 +Video length:video_length, the number of frames video_length to be generated. Default: video_length=8 + +Text-To-Video with Pose Control + + +To generate a video from prompt with additional pose control +Download a demo video + + + Copied +from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) +Read video containing extracted pose images + + + Copied +from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] +To extract pose from actual video, read ControlNet documentation. 
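As a rough illustration only (not part of the original guide), the snippet below sketches how per-frame pose images could be produced from a raw clip with the third-party controlnet_aux package; the OpenposeDetector checkpoint name and the video_path variable are assumptions.
import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector

# load an OpenPose detector (assumed checkpoint name)
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

# read a few frames from a raw video (video_path is assumed to point to a local clip)
reader = imageio.get_reader(video_path, "ffmpeg")
frames = [Image.fromarray(reader.get_data(i)) for i in range(8)]

# each call returns a PIL image of the detected skeleton, usable as the pose_images below
pose_images = [openpose(frame) for frame in frames]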
+Run StableDiffusionControlNetPipeline with our custom attention processor + + + Copied +import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) + +Text-To-Video with Edge Control + +To generate a video from a prompt with additional edge control, +follow the steps described above for pose-guided generation, using the Canny edge ControlNet model instead. + +Video Instruct-Pix2Pix + +To perform text-guided video editing (with InstructPix2Pix): +Download a demo video + + + Copied +from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) +Read video from path + + + Copied +from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] +Run StableDiffusionInstructPix2PixPipeline with our custom attention processor + + + Copied +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) + +DreamBooth specialization + +Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for the +Canny edge ControlNet model and the +Avatar style DreamBooth model: +Download a demo video + + + Copied +from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) +Read video from path + + + Copied +from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] +Run StableDiffusionControlNetPipeline with the custom-trained DreamBooth model + + + Copied +import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model 
+model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) +You can filter out some available DreamBooth-trained models with this link. + +TextToVideoZeroPipeline + + +class diffusers.TextToVideoZeroPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for zero-shot text-to-video generation using Stable Diffusion. +This model inherits from StableDiffusionPipeline. Check the superclass documentation for the generic methods +the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> + +( +prompt: typing.Union[str, typing.List[str]] +video_length: typing.Optional[int] = 8 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_videos_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +motion_field_strength_x: float = 12 +motion_field_strength_y: float = 12 +output_type: typing.Optional[str] = 'tensor' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +t0: int = 44 +t1: int = 47 + +) +→ +~pipelines.text_to_video_synthesis.TextToVideoPipelineOutput + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. + + +video_length (int, optional, defaults to 8) — The number of generated video frames. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, and is ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "numpy") — +The output format of the generated image. Choose between "latent" and "numpy". + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. 
The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. + + +motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. + + +t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. + + +t1 (int, optional, defaults to 47) — +Timestep t1. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. + + +Returns + +~pipelines.text_to_video_synthesis.TextToVideoPipelineOutput + + + +The output contains an ndarray of the generated images when output_type != ‘latent’, otherwise the latent +codes of the generated images, and a list of bools denoting whether the corresponding generated image +likely represents “not-safe-for-work” (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +backward_loop + +< +source +> + +( +latents +timesteps +prompt_embeds +guidance_scale +callback +callback_steps +num_warmup_steps +extra_step_kwargs +cross_attention_kwargs = None + +) +→ +latents + +Parameters + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. +extra_step_kwargs — extra_step_kwargs. +cross_attention_kwargs — cross_attention_kwargs. +num_warmup_steps — number of warmup steps. + + +Returns + +latents + + + +Latents of the backward process output at time timesteps[-1]. + + +Perform the backward process given a list of time steps. + +forward_loop + +< +source +> + +( +x_t0 +t0 +t1 +generator + +) +→ +x_t1 + +Returns + +x_t1 + + + +The forward process applied to x_t0 from time t0 to t1. + + +Perform the DDPM forward process from time t0 to t1. This is the same as adding noise with the corresponding variance. diff --git a/scrapped_outputs/d116f24e425ebf6630fc38be429fe2f9.txt b/scrapped_outputs/d116f24e425ebf6630fc38be429fe2f9.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b91d27246c47e715d8fb32343e10ffa0337626a --- /dev/null +++ b/scrapped_outputs/d116f24e425ebf6630fc38be429fe2f9.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. 
Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. 
You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/d117cb3ed97e50516c91310f20f3358e.txt b/scrapped_outputs/d117cb3ed97e50516c91310f20f3358e.txt new file mode 100644 index 0000000000000000000000000000000000000000..fc70c0d39d26f48206e829173b963d203c5dbc51 --- /dev/null +++ b/scrapped_outputs/d117cb3ed97e50516c91310f20f3358e.txt @@ -0,0 +1,102 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. 
Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. 
Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. 
Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/d11d20788c2edd7a485044826062264f.txt b/scrapped_outputs/d11d20788c2edd7a485044826062264f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d14052d75d33bc98d22b618380d31a2d.txt b/scrapped_outputs/d14052d75d33bc98d22b618380d31a2d.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/d14052d75d33bc98d22b618380d31a2d.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. 
Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. 
AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. 
This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. 
+ The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. 
negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 language model. + Encodes the prompt into text encoder hidden states. Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence respectively.
Each variable appended with +_1 refers to that corresponding to the second text encoder. Otherwise, it is from the first. forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. 
norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, defaults to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, defaults to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving).
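Because the UNet inherits the generic ModelMixin loading and saving methods mentioned above, it can also be instantiated on its own. The snippet below is a minimal sketch rather than an official example; it assumes the class is exported at the top level of diffusers (as the class path above suggests) and that the cvssp/audioldm2 checkpoint stores its UNet weights under a unet subfolder: Copied
import torch
from diffusers import AudioLDM2UNet2DConditionModel

# Load just the denoising UNet from the full AudioLDM2 checkpoint
# (assumption: the weights live under the "unet" subfolder of the repository).
unet = AudioLDM2UNet2DConditionModel.from_pretrained(
    "cvssp/audioldm2", subfolder="unet", torch_dtype=torch.float16
)

# The config reflects the dual cross-attention design described above.
print(unet.config.cross_attention_dim)
Loading a component this way can be handy when you want to swap a fine-tuned UNet into an existing AudioLDM2Pipeline.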
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/d140c0579a682fc0e4bb7bf1bca208b4.txt b/scrapped_outputs/d140c0579a682fc0e4bb7bf1bca208b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/d140c0579a682fc0e4bb7bf1bca208b4.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. 
Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. 
More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the super resolution. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can then be left as None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14).
text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/d163954a127e8f3b77c8e23d60e56b5e.txt b/scrapped_outputs/d163954a127e8f3b77c8e23d60e56b5e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e807efa0bdba9fcaf725824d3ab7c1cc5f8142b5 --- /dev/null +++ b/scrapped_outputs/d163954a127e8f3b77c8e23d60e56b5e.txt @@ -0,0 +1,138 @@ +Kandinsky 3 Kandinsky 3 is created by Vladimir Arkhipkin, Anastasia Maltseva, Igor Pavlov, Andrei Filatov, Arseniy Shakhmatov, Andrey Kuznetsov, Denis Dimitrov, and Zein Shaheen. The description from its GitHub page: Kandinsky 3.0 is an open-source text-to-image diffusion model built upon the Kandinsky2-x model family. In comparison to its predecessors, enhancements have been made to the text understanding and visual quality of the model, achieved by increasing the size of the text encoder and Diffusion U-Net models, respectively. Its architecture includes 3 main components: FLAN-UL2, an encoder-decoder model based on the T5 architecture; a new U-Net architecture featuring BigGAN-deep blocks that doubles the depth while maintaining the same number of parameters; and Sber-MoVQGAN, a decoder proven to have superior results in image restoration. The original codebase can be found at ai-forever/Kandinsky-3. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Kandinsky3Pipeline class diffusers.Kandinsky3Pipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = 1024 width: Optional = 1024 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True latents = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order.
guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import AutoPipelineForText2Image +>>> import torch + +>>> pipe = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A photograph of the inside of a subway train. There are raccoons sitting on the seats. One of them is reading a newspaper. The window shows the city in the background." + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device, optional): +torch device to place the resulting embeddings on num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. Encodes the prompt into text encoder hidden states. Kandinsky3Img2ImgPipeline class diffusers.Kandinsky3Img2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: Kandinsky3UNet scheduler: DDPMScheduler movq: VQModel ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 25 guidance_scale: float = 3.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask. Must provide if passing prompt_embeds directly. negative_attention_mask (torch.FloatTensor, optional) — +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A painting of the inside of a subway train with tiny raccoons." +>>> image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky3/t2i.png") + +>>> generator = torch.Generator(device="cpu").manual_seed(0) +>>> image = pipe(prompt, image=image, strength=0.75, num_inference_steps=25, generator=generator).images[0] encode_prompt < source > ( prompt do_classifier_free_guidance = True num_images_per_prompt = 1 device = None negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None _cut_context = False attention_mask: Optional = None negative_attention_mask: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded Encodes the prompt into text encoder hidden states. device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. +attention_mask (torch.FloatTensor, optional): +Pre-generated attention mask. Must provide if passing prompt_embeds directly. +negative_attention_mask (torch.FloatTensor, optional): +Pre-generated negative attention mask. Must provide if passing negative_prompt_embeds directly. diff --git a/scrapped_outputs/d16ff2f0b691ddcf96e7fd35ba32d8a5.txt b/scrapped_outputs/d16ff2f0b691ddcf96e7fd35ba32d8a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d1755c500d0d9d791c8f0705b06f7201.txt b/scrapped_outputs/d1755c500d0d9d791c8f0705b06f7201.txt new file mode 100644 index 0000000000000000000000000000000000000000..acbc313e656972084810639a2513c61961c63127 --- /dev/null +++ b/scrapped_outputs/d1755c500d0d9d791c8f0705b06f7201.txt @@ -0,0 +1 @@ +Normalization layers Customized normalization layers for supporting various models in 🤗 Diffusers. AdaLayerNorm class diffusers.models.normalization.AdaLayerNorm < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer modified to incorporate timestep embeddings. 
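To make the idea behind AdaLayerNorm (and the variants documented below) concrete, here is a minimal conceptual sketch in plain PyTorch of a timestep-conditioned layer norm. It illustrates the general technique only and is not the actual diffusers implementation: a learned timestep embedding predicts a scale and shift that modulate the normalized hidden states: Copied
import torch
import torch.nn as nn


class TimestepAdaLayerNorm(nn.Module):
    """Toy adaptive layer norm: the timestep selects a learned scale/shift (illustration only)."""

    def __init__(self, embedding_dim: int, num_embeddings: int):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings, embedding_dim)      # one embedding per timestep
        self.act = nn.SiLU()
        self.linear = nn.Linear(embedding_dim, 2 * embedding_dim)   # predicts scale and shift
        self.norm = nn.LayerNorm(embedding_dim, elementwise_affine=False)

    def forward(self, x: torch.Tensor, timestep: torch.Tensor) -> torch.Tensor:
        scale, shift = self.linear(self.act(self.emb(timestep))).chunk(2, dim=-1)
        # Modulate the normalized hidden states with the timestep-dependent scale/shift.
        return self.norm(x) * (1 + scale[:, None]) + shift[:, None]


hidden_states = torch.randn(2, 16, 64)        # (batch, tokens, channels)
timesteps = torch.randint(0, 1000, (2,))      # one diffusion timestep per sample
out = TimestepAdaLayerNorm(embedding_dim=64, num_embeddings=1000)(hidden_states, timesteps)
print(out.shape)  # torch.Size([2, 16, 64])
The variants below follow the same recipe, differing mainly in which conditioning signal is embedded, whether extra gate values are predicted, and whether the modulation is applied per group instead of per layer.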
AdaLayerNormZero class diffusers.models.normalization.AdaLayerNormZero < source > ( embedding_dim: int num_embeddings: int ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. Norm layer adaptive layer norm zero (adaLN-Zero). AdaLayerNormSingle class diffusers.models.normalization.AdaLayerNormSingle < source > ( embedding_dim: int use_additional_conditions: bool = False ) Parameters embedding_dim (int) — The size of each embedding vector. use_additional_conditions (bool) — To use additional conditions for normalization or not. Norm layer adaptive layer norm single (adaLN-single). As proposed in PixArt-Alpha (see: https://arxiv.org/abs/2310.00426; Section 2.3). AdaGroupNorm class diffusers.models.normalization.AdaGroupNorm < source > ( embedding_dim: int out_dim: int num_groups: int act_fn: Optional = None eps: float = 1e-05 ) Parameters embedding_dim (int) — The size of each embedding vector. num_embeddings (int) — The size of the embeddings dictionary. num_groups (int) — The number of groups to separate the channels into. act_fn (str, optional, defaults to None) — The activation function to use. eps (float, optional, defaults to 1e-5) — The epsilon value to use for numerical stability. GroupNorm layer modified to incorporate timestep embeddings. diff --git a/scrapped_outputs/d176bcdfbb67dc620c285552eb85fa09.txt b/scrapped_outputs/d176bcdfbb67dc620c285552eb85fa09.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/d176bcdfbb67dc620c285552eb85fa09.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. 
You can speed this up by switching to a lower precision like float16 or running fewer inference steps. Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. 
Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! 
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/d18dfab86bdd9319ad75ffc73e0a2944.txt b/scrapped_outputs/d18dfab86bdd9319ad75ffc73e0a2944.txt new file mode 100644 index 0000000000000000000000000000000000000000..12062fad7c1578fb4c93f827d5677dc581faff89 --- /dev/null +++ b/scrapped_outputs/d18dfab86bdd9319ad75ffc73e0a2944.txt @@ -0,0 +1,323 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! 
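The short sketch below illustrates the loading behaviour described above; runwayml/stable-diffusion-v1-5 is used purely as an example checkpoint. from_pretrained() inspects the checkpoint, picks the matching pipeline subclass, and loads every component for you: Copied
from diffusers import DiffusionPipeline

# from_pretrained() detects the concrete pipeline class from the checkpoint
# and loads every component it needs (VAE, text encoder, UNet, scheduler, ...).
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)

print(pipeline.__class__.__name__)  # StableDiffusionPipeline, not DiffusionPipeline
print(list(pipeline.components))    # the component names bundled by this pipeline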
The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. 
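As a brief, hedged illustration of the device property (and the to() method documented next), using a checkpoint that appears elsewhere on this page:

import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
print(pipeline.device)  # device the pipeline's modules currently live on, e.g. cpu

# Move every module to the GPU and cast the weights to half precision.
pipeline = pipeline.to(device="cuda", dtype=torch.float16)
print(pipeline.device)  # now reports the CUDA device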
+ to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (bool, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to the specified dtype and/or device. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist.
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. 
When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. 
It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional) — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. Defaults to the latest stable 🤗 Diffusers version. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. 
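Because extra keyword arguments passed to from_pretrained() are forwarded to the pipeline's __init__, components such as the scheduler can be overridden at load time. Below is a minimal sketch under the assumption that the repository provides fp16 safetensors weights and a scheduler subfolder:

import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

repo_id = "runwayml/stable-diffusion-v1-5"  # assumed repository for illustration

# Load a replacement scheduler from the checkpoint's scheduler subfolder ...
scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler")

# ... and pass it as a kwarg so it is used instead of the default scheduler.
pipeline = DiffusionPipeline.from_pretrained(
    repo_id,
    scheduler=scheduler,
    torch_dtype=torch.float16,  # load weights in half precision
    variant="fp16",             # fetch fp16 weight files if the repo provides them
    use_safetensors=True,
)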
If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. 
from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... 
) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". 
+unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/d194123777909746de047ef0c7128b96.txt b/scrapped_outputs/d194123777909746de047ef0c7128b96.txt new file mode 100644 index 0000000000000000000000000000000000000000..17224b1ed5c3d86d388a56d170467252653c4fe2 --- /dev/null +++ b/scrapped_outputs/d194123777909746de047ef0c7128b96.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. 
The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, defaults to True) — +If enabled, it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple.
Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. 
scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/d19ba16b0720edb8d6f5dd695832f367.txt b/scrapped_outputs/d19ba16b0720edb8d6f5dd695832f367.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/d19ba16b0720edb8d6f5dd695832f367.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. 
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). 
Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/d19e1ebecad1550eecf0e57df33232e7.txt b/scrapped_outputs/d19e1ebecad1550eecf0e57df33232e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..26444ce0b02439b036cdb5951e8bcee16133d21d --- /dev/null +++ b/scrapped_outputs/d19e1ebecad1550eecf0e57df33232e7.txt @@ -0,0 +1,7 @@ +Value-guided planning 🧪 This is an experimental pipeline for reinforcement learning! 
This pipeline is based on the Planning with Diffusion for Flexible Behavior Synthesis paper by Michael Janner, Yilun Du, Joshua B. Tenenbaum, Sergey Levine. The abstract from the paper is: Model-based reinforcement learning methods often use learning only for the purpose of estimating an approximate dynamics model, offloading the rest of the decision-making work to classical trajectory optimizers. While conceptually simple, this combination has a number of empirical shortcomings, suggesting that learned models may not be well-suited to standard trajectory optimization. In this paper, we consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem, such that sampling from the model and planning with it become nearly identical. The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories. We show how classifier-guided sampling and image inpainting can be reinterpreted as coherent planning strategies, explore the unusual and useful properties of diffusion-based planning methods, and demonstrate the effectiveness of our framework in control settings that emphasize long-horizon decision-making and test-time flexibility. You can find additional information about the model on the project page, the original codebase, or try it out in a demo notebook. The script to run the model is available here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ValueGuidedRLPipeline class diffusers.experimental.ValueGuidedRLPipeline < source > ( value_function: UNet1DModel unet: UNet1DModel scheduler: DDPMScheduler env ) Parameters value_function (UNet1DModel) — +A specialized UNet for fine-tuning trajectories based on reward. unet (UNet1DModel) — +UNet architecture to denoise the encoded trajectories. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. env () — +An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). diff --git a/scrapped_outputs/d1c7981fca980a89be7497d52facb95d.txt b/scrapped_outputs/d1c7981fca980a89be7497d52facb95d.txt new file mode 100644 index 0000000000000000000000000000000000000000..6bf3fb5ccdc0024cae1750fc6804f62a64341e8e --- /dev/null +++ b/scrapped_outputs/d1c7981fca980a89be7497d52facb95d.txt @@ -0,0 +1,59 @@ +RePaint RePaint: Inpainting using Denoising Diffusion Probabilistic Models is by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types.
Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. +RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. The original codebase can be found at andreas128/RePaint. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. RePaintPipeline class diffusers.RePaintPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (RePaintScheduler) — +A RePaintScheduler to be used in combination with unet to denoise the encoded image. Pipeline for image inpainting using RePaint. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: typing.Union[torch.Tensor, PIL.Image.Image] mask_image: typing.Union[torch.Tensor, PIL.Image.Image] num_inference_steps: int = 250 eta: float = 0.0 jump_length: int = 10 jump_n_sample: int = 10 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +The original image to inpaint on. mask_image (torch.FloatTensor or PIL.Image.Image) — +The mask_image where 0.0 define which part of the original image to inpaint. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float) — +The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to +DDIM and 1.0 is the DDPM scheduler. jump_length (int, optional, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, optional, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from io import BytesIO +>>> import torch +>>> import PIL +>>> import requests +>>> from diffusers import RePaintPipeline, RePaintScheduler + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +>>> mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +>>> # Load the original image and the mask as PIL images +>>> original_image = download_image(img_url).resize((256, 256)) +>>> mask_image = download_image(mask_url).resize((256, 256)) + +>>> # Load the RePaint scheduler and pipeline based on a pretrained DDPM model +>>> scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256") +>>> pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> output = pipe( +... image=original_image, +... mask_image=mask_image, +... num_inference_steps=250, +... eta=0.0, +... jump_length=10, +... jump_n_sample=10, +... generator=generator, +... ) +>>> inpainted_image = output.images[0] ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/d1d3714335516ecb067b98d5520dbb47.txt b/scrapped_outputs/d1d3714335516ecb067b98d5520dbb47.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/d1d3714335516ecb067b98d5520dbb47.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. 
Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... 
"runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/d1dcd9fa991f711718aae60abbe44e5a.txt b/scrapped_outputs/d1dcd9fa991f711718aae60abbe44e5a.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb52f025805b2d01444b6cca5ff880e32ccc5ff8 --- /dev/null +++ b/scrapped_outputs/d1dcd9fa991f711718aae60abbe44e5a.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. 
The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 latents_mean: Optional = None latents_std: Optional = None force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, default to True) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. 
VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple.
Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. 
dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/d1f7baf065288285c5d76e9722f3598a.txt b/scrapped_outputs/d1f7baf065288285c5d76e9722f3598a.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/d1f7baf065288285c5d76e9722f3598a.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
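Because this scheduler follows the standard SchedulerMixin interface (set_timesteps, scale_model_input, step), it can be swapped into an existing pipeline by rebuilding it from the current scheduler's config. A minimal sketch, with the runwayml/stable-diffusion-v1-5 checkpoint used purely as an example: Copied
import torch
from diffusers import StableDiffusionPipeline, KDPM2AncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the ancestral KDPM2 scheduler, reusing the existing scheduler configuration
pipe.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Ancestral sampling is stochastic, so pass a generator for reproducible results
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe("an astronaut riding a horse on mars", num_inference_steps=30, generator=generator).images[0]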
diff --git a/scrapped_outputs/d22d3930c3ca559225840d39b8f42e3a.txt b/scrapped_outputs/d22d3930c3ca559225840d39b8f42e3a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d2b30fca25fd17e4c38954e5d98cd816.txt b/scrapped_outputs/d2b30fca25fd17e4c38954e5d98cd816.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/d2b30fca25fd17e4c38954e5d98cd816.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. 
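To make the model/scheduler split concrete, the sketch below shows what the unrolled denoising loop mentioned above can look like when both components are handled explicitly (the google/ddpm-cat-256 checkpoint is used purely as an example): Copied
import torch
from diffusers import UNet2DModel, DDPMScheduler

# Load the denoising model and the scheduler as two separate, explicit components
model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
scheduler.set_timesteps(50)

# Start from pure Gaussian noise and denoise step by step
sample = torch.randn(1, 3, model.config.sample_size, model.config.sample_size, device="cuda")
for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    sample = scheduler.step(noise_pred, t, sample).prev_sample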
Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. 
Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. 
For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/d2c4fb0b1ded97084c66a73177682063.txt b/scrapped_outputs/d2c4fb0b1ded97084c66a73177682063.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d2d0c24a87b39ea3a6c90c66bacfa912.txt b/scrapped_outputs/d2d0c24a87b39ea3a6c90c66bacfa912.txt new file mode 100644 index 0000000000000000000000000000000000000000..0edb177b2ecc106af9689aa9d54df820cf9faa8f --- /dev/null +++ b/scrapped_outputs/d2d0c24a87b39ea3a6c90c66bacfa912.txt @@ -0,0 +1,2 @@ +Spectrogram Diffusion Spectrogram Diffusion is by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel. An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. 
We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes. The original codebase can be found at magenta/music-spectrogram-diffusion. As depicted above the model takes as input a MIDI file and tokenizes it into a sequence of 5 second intervals. Each tokenized interval then together with positional encodings is passed through the Note Encoder and its representation is concatenated with the previous window’s generated spectrogram representation obtained via the Context Encoder. For the initial 5 second window this is set to zero. The resulting context is then used as conditioning to sample the denoised Spectrogram from the MIDI window and we concatenate this spectrogram to the final output as well as use it for the context of the next MIDI window. The process repeats till we have gone over all the MIDI inputs. Finally a MelGAN decoder converts the potentially long spectrogram to audio which is the final result of this pipeline. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. SpectrogramDiffusionPipeline class diffusers.SpectrogramDiffusionPipeline < source > ( *args **kwargs ) __call__ ( *args **kwargs ) Call self as a function. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/d2d9c2189e4bab28616b1c9c545fbe9c.txt b/scrapped_outputs/d2d9c2189e4bab28616b1c9c545fbe9c.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/d2d9c2189e4bab28616b1c9c545fbe9c.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. 
We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/d2e498e2256eadea8c5c6e64e75b78b5.txt b/scrapped_outputs/d2e498e2256eadea8c5c6e64e75b78b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/d2e498e2256eadea8c5c6e64e75b78b5.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. 
They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. +>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/d2ecff00affbc278876ef4ffd829ad28.txt b/scrapped_outputs/d2ecff00affbc278876ef4ffd829ad28.txt new file mode 100644 index 0000000000000000000000000000000000000000..d497661a6c9cfce4b8b06d95ad96868e9dc634a1 --- /dev/null +++ b/scrapped_outputs/d2ecff00affbc278876ef4ffd829ad28.txt @@ -0,0 +1,42 @@ +Textual inversion The StableDiffusionPipeline supports textual inversion, a technique that enables a model like Stable Diffusion to learn a new concept from just a few sample images. This gives you more control over the generated images and allows you to tailor the model towards specific concepts. You can get started quickly with a collection of community created concepts in the Stable Diffusion Conceptualizer. This guide will show you how to run inference with textual inversion using a pre-learned concept from the Stable Diffusion Conceptualizer. 
If you’re interested in teaching a model new concepts with textual inversion, take a look at the Textual Inversion training guide. Import the necessary libraries: Copied import torch +from diffusers import StableDiffusionPipeline +from diffusers.utils import make_image_grid Stable Diffusion 1 and 2 Pick a Stable Diffusion checkpoint and a pre-learned concept from the Stable Diffusion Conceptualizer: Copied pretrained_model_name_or_path = "runwayml/stable-diffusion-v1-5" +repo_id_embeds = "sd-concepts-library/cat-toy" Now you can load a pipeline and pass the pre-learned concept to it: Copied pipeline = StableDiffusionPipeline.from_pretrained( + pretrained_model_name_or_path, torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline.load_textual_inversion(repo_id_embeds) Create a prompt with the pre-learned concept by using the special placeholder token <cat-toy>, and choose the number of samples and rows of images you’d like to generate: Copied prompt = "a grafitti in a favela wall with a <cat-toy> on it" + +num_samples_per_row = 2 +num_rows = 2 Then run the pipeline (feel free to adjust parameters like num_inference_steps and guidance_scale to see how they affect image quality), save the generated images, and visualize them with the make_image_grid helper function imported at the beginning: Copied all_images = [] +for _ in range(num_rows): + images = pipeline(prompt, num_images_per_prompt=num_samples_per_row, num_inference_steps=50, guidance_scale=7.5).images + all_images.extend(images) + +grid = make_image_grid(all_images, num_rows, num_samples_per_row) +grid Stable Diffusion XL Stable Diffusion XL (SDXL) can also use textual inversion vectors for inference. In contrast to Stable Diffusion 1 and 2, SDXL has two text encoders so you’ll need two textual inversion embeddings - one for each text encoder model. Let’s download the SDXL textual inversion embeddings and have a closer look at their structure: Copied from huggingface_hub import hf_hub_download +from safetensors.torch import load_file + +file = hf_hub_download("dn118/unaestheticXL", filename="unaestheticXLv31.safetensors") +state_dict = load_file(file) +state_dict Copied {'clip_g': tensor([[ 0.0077, -0.0112, 0.0065, ..., 0.0195, 0.0159, 0.0275], + ..., + [-0.0170, 0.0213, 0.0143, ..., -0.0302, -0.0240, -0.0362]], + 'clip_l': tensor([[ 0.0023, 0.0192, 0.0213, ..., -0.0385, 0.0048, -0.0011], + ..., + [ 0.0475, -0.0508, -0.0145, ..., 0.0070, -0.0089, -0.0163]], There are two tensors, "clip_g" and "clip_l". +"clip_g" corresponds to the bigger text encoder in SDXL and refers to +pipe.text_encoder_2 and "clip_l" refers to pipe.text_encoder.
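If you want to double-check which embedding belongs to which text encoder, comparing the embedding widths with the hidden sizes of the two encoders is a quick sanity check. The sketch below is an illustration rather than part of the original guide; it reuses the state_dict loaded above and assumes the standard SDXL encoder widths (1280 for the bigG encoder behind pipe.text_encoder_2, 768 for the ViT-L encoder behind pipe.text_encoder): Copied
# quick, illustrative sanity check, reusing `state_dict` from the snippet above
print(state_dict["clip_g"].shape)  # expected last dimension: 1280 -> goes with pipe.text_encoder_2
print(state_dict["clip_l"].shape)  # expected last dimension: 768  -> goes with pipe.text_encoder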
Now you can load each tensor separately by passing them along with the correct text encoder and tokenizer +to load_textual_inversion(): Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.load_textual_inversion(state_dict["clip_g"], token="unaestheticXLv31", text_encoder=pipe.text_encoder_2, tokenizer=pipe.tokenizer_2) +pipe.load_textual_inversion(state_dict["clip_l"], token="unaestheticXLv31", text_encoder=pipe.text_encoder, tokenizer=pipe.tokenizer) + +# the embedding should be used as a negative embedding, so we pass it as a negative prompt +generator = torch.Generator().manual_seed(33) +image = pipe("a woman standing in front of a mountain", negative_prompt="unaestheticXLv31", generator=generator).images[0] +image diff --git a/scrapped_outputs/d3213d5318be8eadefb3cd8ffb2ea752.txt b/scrapped_outputs/d3213d5318be8eadefb3cd8ffb2ea752.txt new file mode 100644 index 0000000000000000000000000000000000000000..039dc21252f140b854db30919cf4105c2b03492c --- /dev/null +++ b/scrapped_outputs/d3213d5318be8eadefb3cd8ffb2ea752.txt @@ -0,0 +1,249 @@ +Evaluating Diffusion Models Evaluation of generative models like Stable Diffusion is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don’t necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside diffusers. The methods shown in this document can also be used to evaluate different noise schedulers keeping the underlying generation model fixed. Scenarios We cover Diffusion models with the following pipelines: Text-guided image generation (such as the StableDiffusionPipeline). Text-guided image generation, additionally conditioned on an input image (such as the StableDiffusionImg2ImgPipeline and StableDiffusionInstructPix2PixPipeline). Class-conditioned image generation models (such as the DiTPipeline). Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. +DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by Imagen and Parti respectively. From the official Parti website: PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. PartiPrompts has the following columns: Prompt Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) 
These benchmarks allow for side-by-side human evaluation of different image generation models. For this, the 🧨 Diffusers team has built Open Parti Prompts, which is a community-driven qualitative benchmark based on Parti Prompts to compare state-of-the-art open-source diffusion models: Open Parti Prompts Game: For 10 parti prompts, 4 generated images are shown and the user selects the image that suits the prompt best. Open Parti Prompts Leaderboard: The leaderboard comparing the currently best open-sourced diffusion models to each other. To manually compare images, let’s see how we can use diffusers on a couple of PartiPrompts. Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a dataset. Copied from datasets import load_dataset + +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] + +# Fixing these sample prompts in the interest of reproducibility. +sample_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] Now we can use these prompts to generate some images using Stable Diffusion (v1-4 checkpoint): Copied import torch + +seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator).images We can also set num_images_per_prompt accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint (v1-5), yields: Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the +training progress. In our training scripts, we support this utility with additional support for +logging to TensorBoard and Weights & Biases. Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: CLIP score CLIP directional similarity FID Text-guided image generation CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept “compatibility”. Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. 
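For intuition, the CLIP score is commonly defined from the cosine similarity of the CLIP image and caption embeddings; the formula below is background (it matches the definition used by the torchmetrics clip_score utility applied next, but is not stated in this guide):
\[
\text{CLIPScore}(I, C) = \max\big(100 \cdot \cos(E_I, E_C),\ 0\big)
\]
where E_I and E_C are the CLIP embeddings of the image and the caption, which is why the scores reported below fall roughly in the 0-100 range.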
Let’s first load a StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") Generate some images with multiple prompts: Copied prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="np").images + +print(images.shape) +# (6, 512, 512, 3) And then, we calculate the CLIP score. Copied from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 In the above example, we generated one image per prompt. If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the StableDiffusionPipeline we should pass a generator while calling the pipeline. First, we generate images with a +fixed seed with the v1-4 Stable Diffusion checkpoint: Copied seed = 0 +generator = torch.manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images Then we load the v1-5 checkpoint to generate images: Copied model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images And finally, we compare their CLIP scores: Copied sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 It seems like the v1-5 checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from alt and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to “engineer” some prompts here. Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let’s take the StableDiffusionInstructPix2PixPipeline, as an example. It takes an edit instruction as an input prompt and an input image to be edited. 
Here is one example: One strategy to evaluate such a model is to measure the consistency of the change between the two images (in CLIP space) with the change between the two image captions (as shown in CLIP-Guided Domain Adaptation of Image Generators). This is referred to as the ”CLIP directional similarity“. Caption 1 corresponds to the input image (image 1) that is to be edited. Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. Following is a pictorial overview: We have prepared a mini dataset to implement this metric. Let’s first load the dataset. Copied from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features Copied {'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} Here we have: input is a caption corresponding to the image. edit denotes the edit instruction. output denotes the modified caption reflecting the edit instruction. Let’s take a look at a sample. Copied idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") Copied Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' And here is the image: Copied dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
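In symbols, with CLIP image embeddings x_1, x_2 for the original and edited images and CLIP text embeddings t_1, t_2 for the original and modified captions, the directional similarity is the cosine similarity between the two edit directions (this matches what the DirectionalSimilarity module implemented below computes):
\[
\text{CLIP}_{\text{dir}} = \cos\big(x_2 - x_1,\ t_2 - t_1\big)
\]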
Let’s first load the StableDiffusionInstructPix2PixPipeline: Copied from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +).to(device) Now, we perform the edits: Copied import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="np", + generator=generator, + ).images[0] + return image + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) To measure the directional similarity, we first load CLIP’s image and text encoders: Copied from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) Notice that we are using a particular CLIP checkpoint, i.e., openai/clip-vit-large-patch14. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the documentation. Next, we prepare a PyTorch nn.Module to compute directional similarity: Copied import torch.nn as nn +import torch.nn.functional as F + + +class DirectionalSimilarity(nn.Module): + def __init__(self, tokenizer, text_encoder, image_processor, image_encoder): + super().__init__() + self.tokenizer = tokenizer + self.text_encoder = text_encoder + self.image_processor = image_processor + self.image_encoder = image_encoder + + def preprocess_image(self, image): + image = self.image_processor(image, return_tensors="pt")["pixel_values"] + return {"pixel_values": image.to(device)} + + def tokenize_text(self, text): + inputs = self.tokenizer( + text, + max_length=self.tokenizer.model_max_length, + padding="max_length", + truncation=True, + return_tensors="pt", + ) + return {"input_ids": inputs.input_ids.to(device)} + + def encode_image(self, image): + preprocessed_image = self.preprocess_image(image) + image_features = self.image_encoder(**preprocessed_image).image_embeds + image_features = image_features / image_features.norm(dim=1, keepdim=True) + return image_features + + def encode_text(self, text): + tokenized_text = self.tokenize_text(text) + text_features = self.text_encoder(**tokenized_text).text_embeds + text_features = text_features / text_features.norm(dim=1, keepdim=True) + return text_features + + def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two): + sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one) + return sim_direction + + def forward(self, image_one, image_two, caption_one, caption_two): + img_feat_one = self.encode_image(image_one) + img_feat_two = self.encode_image(image_two) + text_feat_one = self.encode_text(caption_one) + text_feat_two = 
self.encode_text(caption_two) + directional_similarity = self.compute_directional_similarity( + img_feat_one, img_feat_two, text_feat_one, text_feat_two + ) + return directional_similarity Let’s put DirectionalSimilarity to use now. Copied dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder) +scores = [] + +for i in range(len(input_images)): + original_image = input_images[i] + original_caption = original_captions[i] + edited_image = edited_images[i] + modified_caption = modified_captions[i] + + similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption) + scores.append(float(similarity_score.detach().cpu())) + +print(f"CLIP directional similarity: {np.mean(scores)}") +# CLIP directional similarity: 0.0797976553440094 Like the CLIP Score, the higher the CLIP directional similarity, the better it is. It should be noted that the StableDiffusionInstructPix2PixPipeline exposes two arguments, namely, image_guidance_scale and guidance_scale that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see the impact of that on the directional similarity. We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do F.cosine_similarity(img_feat_two, img_feat_one). For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. We can use these metrics for similar pipelines such as the StableDiffusionPix2PixZeroPipeline. Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. Extending metrics like IS, FID (discussed later), or KID can be difficult when the model under evaluation was pre-trained on a large image-captioning dataset (such as the LAION-5B dataset). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. Using the above metrics helps evaluate models that are class-conditioned. For example, DiT. It was pre-trained being conditioned on the ImageNet-1k classes. Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as ImageNet-1k. Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID (Heusel et al.). We show how to compute it with the DiTPipeline, which uses the DiT model under the hood. FID aims to measure how similar are two datasets of images. As per this resource: Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. 
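For reference, the Fréchet distance between the two Gaussians fitted to the Inception features of the real and generated images, with means \(\mu_r, \mu_g\) and covariances \(\Sigma_r, \Sigma_g\), has the standard closed form below (stated here as background; it is what the torchmetrics implementation used later computes):
\[
\text{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\big(\Sigma_r + \Sigma_g - 2(\Sigma_r \Sigma_g)^{1/2}\big)
\]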
Let’s first download a few images from the ImageNet-1k training set: Copied from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") Copied from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] These are 10 images from the following ImageNet-1k classes: “cassette_player”, “chain_saw” (x2), “church”, “gas_pump” (x3), “parachute” (x2), and “tench”. Real images. Now that the images are loaded, let’s apply some lightweight pre-processing on them to use them for FID calculation. Copied from torchvision.transforms import functional as F + + +def preprocess_image(image): + image = torch.tensor(image).unsqueeze(0) + image = image.permute(0, 3, 1, 2) / 255.0 + return F.center_crop(image, (256, 256)) + +real_images = torch.cat([preprocess_image(image) for image in real_images]) +print(real_images.shape) +# torch.Size([10, 3, 256, 256]) We now load the DiTPipeline to generate images conditioned on the above-mentioned classes. Copied from diffusers import DiTPipeline, DPMSolverMultistepScheduler + +dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config) +dit_pipeline = dit_pipeline.to("cuda") + +words = [ + "cassette player", + "chainsaw", + "chainsaw", + "church", + "gas pump", + "gas pump", + "gas pump", + "parachute", + "parachute", + "tench", +] + +class_ids = dit_pipeline.get_label_ids(words) +output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np") + +fake_images = output.images +fake_images = torch.tensor(fake_images) +fake_images = fake_images.permute(0, 3, 1, 2) +print(fake_images.shape) +# torch.Size([10, 3, 256, 256]) Now, we can compute the FID using torchmetrics. Copied from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 The lower the FID, the better it is. Several things can influence FID here: Number of images (both real and fake) Randomness induced in the diffusion process Number of inference steps in the diffusion process The scheduler being used in the diffusion process For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. FID results tend to be fragile as they depend on a lot of factors: The specific Inception model used during computation. The implementation accuracy of the computation. The image format (not the same if we start from PNGs vs JPGs). Keeping that in mind, FID is often most useful when comparing similar runs, but it is +hard to reproduce paper results unless the authors carefully disclose the FID +measurement code. These points apply to other related metrics too, such as KID and IS. 
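Since the recommendation above is to run the evaluation across several seeds and report an average, here is a minimal sketch of what that could look like; it simply reuses dit_pipeline, class_ids, real_images and FrechetInceptionDistance from the earlier snippets, and the seed list is an arbitrary choice for illustration: Copied
import numpy as np
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# seed-averaged FID (illustrative sketch, reusing objects defined above)
fid_values = []
for seed in [0, 1, 2]:
    generator = torch.manual_seed(seed)
    output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="np")
    fake_images = torch.tensor(output.images).permute(0, 3, 1, 2)

    fid = FrechetInceptionDistance(normalize=True)
    fid.update(real_images, real=True)
    fid.update(fake_images, real=False)
    fid_values.append(float(fid.compute()))

print(f"FID: {np.mean(fid_values):.2f} +/- {np.std(fid_values):.2f}")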
As a final step, let’s visually inspect the fake_images. Fake images. diff --git a/scrapped_outputs/d3331b6a9b5028617dae5457ebe29c15.txt b/scrapped_outputs/d3331b6a9b5028617dae5457ebe29c15.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/d3331b6a9b5028617dae5457ebe29c15.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. 
You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/d33d545fa9902b9ff141c1385acd9bab.txt b/scrapped_outputs/d33d545fa9902b9ff141c1385acd9bab.txt new file mode 100644 index 0000000000000000000000000000000000000000..74c330470e4ee15b672abb1f3100e9f38a3251c7 --- /dev/null +++ b/scrapped_outputs/d33d545fa9902b9ff141c1385acd9bab.txt @@ -0,0 +1,509 @@ +Loading + +A core premise of the diffusers library is to make diffusion models as accessible as possible. +Accessibility is therefore achieved by providing an API to load complete diffusion pipelines as well as individual components with a single line of code. +In the following we explain in-detail how to easily load: +Complete Diffusion Pipelines via the DiffusionPipeline.from_pretrained() +Diffusion Models via ModelMixin.from_pretrained() +Schedulers via SchedulerMixin.from_pretrained() + +Loading pipelines + +The DiffusionPipeline class is the easiest way to access any diffusion model that is available on the Hub. Let’s look at an example on how to download Runway’s Stable Diffusion model. + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id) +Here DiffusionPipeline automatically detects the correct pipeline (i.e. StableDiffusionPipeline), downloads and caches all required configuration and weight files (if not already done so), and finally returns a pipeline instance, called pipe. +The pipeline instance can then be called using StableDiffusionPipeline.call() (i.e., pipe("image of a astronaut riding a horse")) for text-to-image generation. +Instead of using the generic DiffusionPipeline class for loading, you can also load the appropriate pipeline class directly. The code snippet above yields the same instance as when doing: + + + Copied +from diffusers import StableDiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(repo_id) +Many checkpoints, such as CompVis/stable-diffusion-v1-4 and runwayml/stable-diffusion-v1-5 can be used for multiple tasks, e.g. 
text-to-image or image-to-image. +If you want to use those checkpoints for a task that is different from the default one, you have to load it directly from the corresponding task-specific pipeline class: + + + Copied +from diffusers import StableDiffusionImg2ImgPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) +Diffusion pipelines like StableDiffusionPipeline or StableDiffusionImg2ImgPipeline consist of multiple components. These components can be both parameterized models, such as "unet", "vae" and "text_encoder", tokenizers or schedulers. +These components often interact in complex ways with each other when using the pipeline in inference, e.g. for StableDiffusionPipeline the inference call is explained here. +The purpose of the pipeline classes is to wrap the complexity of these diffusion systems and give the user an easy-to-use API while staying flexible for customization, as will be shown later. + +Loading pipelines locally + +If you prefer to have complete control over the pipeline and its corresponding files or, as said before, if you want to use pipelines that require an access request without having to be connected to the Hugging Face Hub, +we recommend loading pipelines locally. +To load a diffusion pipeline locally, you first need to manually download the whole folder structure on your local disk and then pass a local path to the DiffusionPipeline.from_pretrained(). Let’s again look at an example for +Runway’s Stable Diffusion Diffusion model. +First, you should make use of git-lfs to download the whole folder structure that has been uploaded to the model repository: + + + Copied +git lfs install +git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 +The command above will create a local folder called ./stable-diffusion-v1-5 on your disk. +Now, all you have to do is to simply pass the local folder path to from_pretrained: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "./stable-diffusion-v1-5" +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id) +If repo_id is a local path, as it is the case here, DiffusionPipeline.from_pretrained() will automatically detect it and therefore not try to download any files from the Hub. +While we usually recommend to load weights directly from the Hub to be certain to stay up to date with the newest changes, loading pipelines locally should be preferred if one +wants to stay anonymous, self-contained applications, etc… + +Loading customized pipelines + +Advanced users that want to load customized versions of diffusion pipelines can do so by swapping any of the default components, e.g. the scheduler, with other scheduler classes. +A classical use case of this functionality is to swap the scheduler. Stable Diffusion v1-5 uses the PNDMScheduler by default which is generally not the most performant scheduler. Since the release +of stable diffusion, multiple improved schedulers have been published. To use those, the user has to manually load their preferred scheduler and pass it into DiffusionPipeline.from_pretrained(). +E.g. to use EulerDiscreteScheduler or DPMSolverMultistepScheduler to have a better quality vs. 
generation speed trade-off for inference, one could load them as follows: + + + Copied +from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler + +repo_id = "runwayml/stable-diffusion-v1-5" + +scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +# or +# scheduler = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler) +Three things are worth paying attention to here. +First, the scheduler is loaded with SchedulerMixin.from_pretrained() +Second, the scheduler is loaded with a function argument, called subfolder="scheduler" as the configuration of stable diffusion’s scheduling is defined in a subfolder of the official pipeline repository +Third, the scheduler instance can simply be passed with the scheduler keyword argument to DiffusionPipeline.from_pretrained(). This works because the StableDiffusionPipeline defines its scheduler with the scheduler attribute. It’s not possible to use a different name, such as sampler=scheduler since sampler is not a defined keyword for StableDiffusionPipeline.__init__() +Not only the scheduler components can be customized for diffusion pipelines; in theory, all components of a pipeline can be customized. In practice, however, it often only makes sense to switch out a component that has compatible alternatives to what the pipeline expects. +Many scheduler classes are compatible with each other as can be seen here. This is not always the case for other components, such as the "unet". +One special case that can also be customized is the "safety_checker" of stable diffusion. If you believe the safety checker doesn’t serve you any good, you can simply disable it by passing None: + + + Copied +from diffusers import DiffusionPipeline, EulerDiscreteScheduler, DPMSolverMultistepScheduler + +stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None) +Another common use case is to reuse the same components in multiple pipelines, e.g. the weights and configurations of "runwayml/stable-diffusion-v1-5" can be used for both StableDiffusionPipeline and StableDiffusionImg2ImgPipeline and we might not want to +use the exact same weights into RAM twice. In this case, customizing all the input instances would help us +to only load the weights into RAM once: + + + Copied +from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +stable_diffusion_txt2img = StableDiffusionPipeline.from_pretrained(model_id) + +components = stable_diffusion_txt2img.components + +# weights are not reloaded into RAM +stable_diffusion_img2img = StableDiffusionImg2ImgPipeline(**components) +Note how the above code snippet makes use of DiffusionPipeline.components. + +Loading variants + +Diffusion Pipeline checkpoints can offer variants of the “main” diffusion pipeline checkpoint. +Such checkpoint variants are usually variations of the checkpoint that have advantages for specific use-cases and that are so similar to the “main” checkpoint that they should not be put in a new checkpoint. +A variation of a checkpoint has to have exactly the same serialization format and exactly the same model structure, including all weights having the same tensor shapes. +Examples of variations are different floating point types and non-ema weights. I.e. “fp16”, “bf16”, and “no_ema” are common variations. 
+ +Let’s first talk about what is **not** a checkpoint variant. + +Checkpoint variants do not include different serialization formats (such as safetensors) as weights in different serialization formats are +identical to the weights of the “main” checkpoint, just loaded in a different framework. +Also variants do not correspond to different model structures, e.g. stable-diffusion-v1-5 is not a variant of stable-diffusion-2-0 since the model structure is different (Stable Diffusion 1-5 uses a different CLIPTextModel compared to Stable Diffusion 2.0). +Pipeline checkpoints that are identical in model structure, but have been trained on different datasets, trained with vastly different training setups and thus correspond to different official releases (such as Stable Diffusion v1-4 and Stable Diffusion v1-5) should probably be stored in individual repositories instead of as variations of each other. + +So what are checkpoint variants then? + +Checkpoint variants usually consist of the checkpoint stored in “low-precision, low-storage” dtype so that less bandwidth is required to download them, or of non-exponential-averaged weights that should be used when continuing fine-tuning from the checkpoint. +Both use cases have clear advantages when their weights are considered variants: they share the same serialization format as the reference weights, and they correspond to a specialization of the “main” checkpoint which does not warrant a new model repository. +A checkpoint stored in torch’s half-precision / float16 format requires only half the bandwidth and storage when downloading the checkpoint, +but cannot be used when continuing training or when running the checkpoint on CPU. +Similarly, the non-exponential-averaged (or non-EMA) version of the checkpoint should be used when continuing fine-tuning of the model checkpoint, but should not be used when using the checkpoint for inference. + +How to save and load variants + +Saving a diffusion pipeline as a variant can be done by providing DiffusionPipeline.save_pretrained() with the variant argument. +The variant extends the weight name by the provided variation, by changing the default weight name from diffusion_pytorch_model.bin to diffusion_pytorch_model.{variant}.bin or from diffusion_pytorch_model.safetensors to diffusion_pytorch_model.{variant}.safetensors. By doing so, one creates a variant of the pipeline checkpoint that can be loaded instead of the “main” pipeline checkpoint. +Let’s have a look at how we could create a float16 variant of a pipeline. First, we load +the “main” variant of a checkpoint (stored in float32 precision) and cast it to half precision by passing torch_dtype=torch.float16. + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +Now all model components of the pipeline are stored in half-precision dtype.
We can now save the +pipeline under a "fp16" variant as follows: + + + Copied +pipe.save_pretrained("./stable-diffusion-v1-5", variant="fp16") +If we don’t save into an existing stable-diffusion-v1-5 folder the new folder would look as follows: + + + Copied +stable-diffusion-v1-5 +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +│   └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +│   └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   └── diffusion_pytorch_model.fp16.bin +└── vae + ├── config.json + └── diffusion_pytorch_model.fp16.bin +As one can see, all model files now have a .fp16.bin extension instead of just .bin. +The variant now has to be loaded by also passing a variant="fp16" to DiffusionPipeline.from_pretrained(), e.g.: + + + Copied +DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16) +works just fine, while: + + + Copied +DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", torch_dtype=torch.float16) +throws an Exception: + + + Copied +OSError: Error no file named diffusion_pytorch_model.bin found in directory ./stable-diffusion-v1-45/vae since we **only** stored the model +This is expected as we don’t have any “non-variant” checkpoint files saved locally. +However, the whole idea of pipeline variants is that they can co-exist with the “main” variant, +so one would typically also save the “main” variant in the same folder. Let’s do this: + + + Copied +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.save_pretrained("./stable-diffusion-v1-5") +and upload the pipeline to the Hub under diffusers/stable-diffusion-variants. +The file structure on the Hub now looks as follows: + + + Copied +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +│   ├── pytorch_model.bin +│   └── pytorch_model.fp16.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +│   ├── pytorch_model.bin +│   └── pytorch_model.fp16.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +│   ├── diffusion_pytorch_model.fp16.bin +└── vae + ├── config.json + ├── diffusion_pytorch_model.bin + └── diffusion_pytorch_model.fp16.bin +We can now both download the “main” and the “fp16” variant from the Hub. Both: + + + Copied +pipe = DiffusionPipeline.from_pretrained("diffusers/stable-diffusion-variants") +and + + + Copied +pipe = DiffusionPipeline.from_pretrained("diffusers/stable-diffusion-variants", variant="fp16") +works. +Note that Diffusers never downloads more checkpoints than needed. E.g. when downloading +the “main” variant, none of the “fp16.bin” files are downloaded and cached. +Only when the user specifies variant="fp16" are those files downloaded and cached. +Finally, there are cases where only some of the checkpoint files of the pipeline are of a certain +variation. E.g. it’s usually only the UNet checkpoint that has both a exponential-mean-averaged (EMA) and a non-exponential-mean-averaged (non-EMA) version. All other model components, e.g. 
the text encoder, safety checker or variational auto-encoder usually don’t have such a variation. +In such a case, one would upload just the UNet’s checkpoint file with a non_ema version format (as done here) and upon calling: + + + Copied +pipe = DiffusionPipeline.from_pretrained("diffusers/stable-diffusion-variants", variant="non_ema") +the model will use only the “non_ema” checkpoint variant if it is available - otherwise it’ll load the +“main” variation. In the above example, variant="non_ema" would therefore download the following file structure: + + + Copied +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +│   ├── pytorch_model.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +│   ├── pytorch_model.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   └── diffusion_pytorch_model.non_ema.bin +└── vae + ├── config.json + ├── diffusion_pytorch_model.bin +In a nutshell, using variant="{variant}" will download all files that match the {variant} and if for a model component such a file variant is not present it will download the “main” variant. If neither a “main” or {variant} variant is available, an error will the thrown. + +How does loading work? + +As a class method, DiffusionPipeline.from_pretrained() is responsible for two things: +Download the latest version of the folder structure required to run the repo_id with diffusers and cache them. If the latest folder structure is available in the local cache, DiffusionPipeline.from_pretrained() will simply reuse the cache and not re-download the files. +Load the cached weights into the correct pipeline class – one of the officially supported pipeline classes - and return an instance of the class. The correct pipeline class is thereby retrieved from the model_index.json file. +The underlying folder structure of diffusion pipelines correspond 1-to-1 to their corresponding class instances, e.g. StableDiffusionPipeline for runwayml/stable-diffusion-v1-5 +This can be better understood by looking at an example. Let’s load a pipeline class instance pipe and print it: + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id) +print(pipe) +Output: + + + Copied +StableDiffusionPipeline { + "feature_extractor": [ + "transformers", + "CLIPFeatureExtractor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} +First, we see that the official pipeline is the StableDiffusionPipeline, and second we see that the StableDiffusionPipeline consists of 7 components: +"feature_extractor" of class CLIPFeatureExtractor as defined in transformers. +"safety_checker" as defined here. +"scheduler" of class PNDMScheduler. +"text_encoder" of class CLIPTextModel as defined in transformers. +"tokenizer" of class CLIPTokenizer as defined in transformers. +"unet" of class UNet2DConditionModel. +"vae" of class AutoencoderKL. +Let’s now compare the pipeline instance to the folder structure of the model repository runwayml/stable-diffusion-v1-5. 
Looking at the folder structure of runwayml/stable-diffusion-v1-5 on the Hub and excluding model and saving format variants, we can see it matches 1-to-1 the printed out instance of StableDiffusionPipeline above: + + + Copied +. +├── feature_extractor +│   └── preprocessor_config.json +├── model_index.json +├── safety_checker +│   ├── config.json +│   └── pytorch_model.bin +├── scheduler +│   └── scheduler_config.json +├── text_encoder +│   ├── config.json +│   └── pytorch_model.bin +├── tokenizer +│   ├── merges.txt +│   ├── special_tokens_map.json +│   ├── tokenizer_config.json +│   └── vocab.json +├── unet +│   ├── config.json +│   ├── diffusion_pytorch_model.bin +└── vae + ├── config.json + ├── diffusion_pytorch_model.bin +Each attribute of the instance of StableDiffusionPipeline has its configuration and possibly weights defined in a subfolder that is called exactly like the class attribute ("feature_extractor", "safety_checker", "scheduler", "text_encoder", "tokenizer", "unet", "vae"). Importantly, every pipeline expects a model_index.json file that tells the DiffusionPipeline both: +which pipeline class should be loaded, and +what sub-classes from which library are stored in which subfolders +In the case of runwayml/stable-diffusion-v1-5 the model_index.json is therefore defined as follows: + + + Copied +{ + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.6.0", + "feature_extractor": [ + "transformers", + "CLIPFeatureExtractor" + ], + "safety_checker": [ + "stable_diffusion", + "StableDiffusionSafetyChecker" + ], + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + "text_encoder": [ + "transformers", + "CLIPTextModel" + ], + "tokenizer": [ + "transformers", + "CLIPTokenizer" + ], + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} +_class_name tells DiffusionPipeline which pipeline class should be loaded. +_diffusers_version can be useful to know under which diffusers version this model was created. +Every component of the pipeline is then defined under the form: + + + Copied +"name" : [ + "library", + "class" +] +The "name" field corresponds both to the name of the subfolder in which the configuration and weights are stored as well as the attribute name of the pipeline class (as can be seen here and here +The "library" field corresponds to the name of the library, e.g. diffusers or transformers from which the "class" should be loaded +The "class" field corresponds to the name of the class, e.g. CLIPTokenizer or UNet2DConditionModel + +Loading models + +Models as defined under src/diffusers/models can be loaded via the ModelMixin.from_pretrained() function. The API is very similar the DiffusionPipeline.from_pretrained() and works in the same way: +Download the latest version of the model weights and configuration with diffusers and cache them. If the latest files are available in the local cache, ModelMixin.from_pretrained() will simply reuse the cache and not re-download the files. +Load the cached weights into the defined model class - one of the existing model classes - and return an instance of the class. +In constrast to DiffusionPipeline.from_pretrained(), models rely on fewer files that usually don’t require a folder structure, but just a diffusion_pytorch_model.bin and config.json file. 
+Let’s look at an example: + + + Copied +from diffusers import UNet2DConditionModel + +repo_id = "runwayml/stable-diffusion-v1-5" +model = UNet2DConditionModel.from_pretrained(repo_id, subfolder="unet") +Note how we have to define the subfolder="unet" argument to tell ModelMixin.from_pretrained() that the model weights are located in a subfolder of the repository. +As explained in Loading customized pipelines, one can pass a loaded model to a diffusion pipeline via DiffusionPipeline.from_pretrained(): + + + Copied +from diffusers import DiffusionPipeline + +repo_id = "runwayml/stable-diffusion-v1-5" +pipe = DiffusionPipeline.from_pretrained(repo_id, unet=model) +If the model files can be found directly at the root level, which is usually only the case for some very simple diffusion models, such as google/ddpm-cifar10-32, we don’t +need to pass a subfolder argument: + + + Copied +from diffusers import UNet2DModel + +repo_id = "google/ddpm-cifar10-32" +model = UNet2DModel.from_pretrained(repo_id) +As motivated in How to save and load variants?, models can load and +save variants. To load a model variant, one should pass the variant argument to ModelMixin.from_pretrained(). Analogously, to save a model variant, one should pass the variant argument to ModelMixin.save_pretrained(): + + + Copied +from diffusers import UNet2DConditionModel + +model = UNet2DConditionModel.from_pretrained( + "diffusers/stable-diffusion-variants", subfolder="unet", variant="non_ema" +) +model.save_pretrained("./local-unet", variant="non_ema") + +Loading schedulers + +Schedulers rely on SchedulerMixin.from_pretrained(). Schedulers are not parameterized or trained, but instead purely defined by a configuration file. +For consistency, we use the same method name as we do for models or pipelines, but no weights are loaded in this case. +In contrast to pipelines or models, loading schedulers does not consume any significant amount of memory and the same configuration file can often be used for a variety of different schedulers.
+For example, all of: +DDPMScheduler +DDIMScheduler +PNDMScheduler +LMSDiscreteScheduler +EulerDiscreteScheduler +EulerAncestralDiscreteScheduler +DPMSolverMultistepScheduler +are compatible with StableDiffusionPipeline and therefore the same scheduler configuration file can be loaded in any of those classes: + + + Copied +from diffusers import StableDiffusionPipeline +from diffusers import ( + DDPMScheduler, + DDIMScheduler, + PNDMScheduler, + LMSDiscreteScheduler, + EulerDiscreteScheduler, + EulerAncestralDiscreteScheduler, + DPMSolverMultistepScheduler, +) + +repo_id = "runwayml/stable-diffusion-v1-5" + +ddpm = DDPMScheduler.from_pretrained(repo_id, subfolder="scheduler") +ddim = DDIMScheduler.from_pretrained(repo_id, subfolder="scheduler") +pndm = PNDMScheduler.from_pretrained(repo_id, subfolder="scheduler") +lms = LMSDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler_anc = EulerAncestralDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +euler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") +dpm = DPMSolverMultistepScheduler.from_pretrained(repo_id, subfolder="scheduler") + +# replace `dpm` with any of `ddpm`, `ddim`, `pndm`, `lms`, `euler`, `euler_anc` +pipeline = StableDiffusionPipeline.from_pretrained(repo_id, scheduler=dpm) diff --git a/scrapped_outputs/d348b5215830c23cc9d56d651dc2472b.txt b/scrapped_outputs/d348b5215830c23cc9d56d651dc2472b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d378f624c05d789d1fb2dd87d22fe243.txt b/scrapped_outputs/d378f624c05d789d1fb2dd87d22fe243.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/d378f624c05d789d1fb2dd87d22fe243.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. 
Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. 
As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... ) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. 
These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy() +>>> images = (image * 255).round().astype("uint8") +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. 
The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/d381a74616455b6d915b49e1c43be537.txt b/scrapped_outputs/d381a74616455b6d915b49e1c43be537.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d38aeb3b35e5a49905b4bfb4be1dc685.txt b/scrapped_outputs/d38aeb3b35e5a49905b4bfb4be1dc685.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3eb1154d5a9e2ba5bc7c59799d472993105a9c0 --- /dev/null +++ b/scrapped_outputs/d38aeb3b35e5a49905b4bfb4be1dc685.txt @@ -0,0 +1,233 @@ +🧨 Stable Diffusion in JAX / Flax ! + + + + + + + + + + + + +🤗 Hugging Face Diffusers supports Flax since version 0.5.1! This allows for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. +This notebook shows how to run inference using JAX / Flax. If you want more details about how Stable Diffusion works or want to run it in GPU, please refer to this notebook. +First, make sure you are using a TPU backend. If you are running this notebook in Colab, select Runtime in the menu above, then select the option “Change runtime type” and then select TPU under the Hardware accelerator setting. +Note that JAX is not exclusive to TPUs, but it shines on that hardware because each TPU server has 8 TPU accelerators working in parallel. + +Setup + +First make sure diffusers is installed. + + + Copied +!pip install jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +!pip install diffusers + + + Copied +import jax.tools.colab_tpu + +jax.tools.colab_tpu.setup_tpu() +import jax + + + Copied +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert ( + "TPU" in device_type +), "Available device is not a TPU, please select TPU from Edit > Notebook settings > Hardware accelerator" + + + Copied +Found 8 JAX devices of type Cloud TPU. +Then we import all the dependencies. + + + Copied +import numpy as np +import jax +import jax.numpy as jnp + +from pathlib import Path +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard +from PIL import Image + +from huggingface_hub import notebook_login +from diffusers import FlaxStableDiffusionPipeline + +Model Loading + +TPU devices support bfloat16, an efficient half-float type. We’ll use it for our tests, but you can also use float32 to use full precision instead. + + + Copied +dtype = jnp.bfloat16 +Flax is a functional framework, so models are stateless and parameters are stored outside them. Loading the pre-trained Flax pipeline will return both the pipeline itself and the model weights (or parameters). We are using a bf16 version of the weights, which leads to type warnings that you can safely ignore. 
+ + + Copied +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) + +Inference + +Since TPUs usually have 8 devices working in parallel, we’ll replicate our prompt as many times as devices we have. Then we’ll perform inference on the 8 devices at once, each responsible for generating one image. Thus, we’ll get 8 images in the same amount of time it takes for one chip to generate a single one. +After replicating the prompt, we obtain the tokenized text ids by invoking the prepare_inputs function of the pipeline. The length of the tokenized text is set to 77 tokens, as required by the configuration of the underlying CLIP Text model. + + + Copied +prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape + + + Copied +(8, 77) + +Replication and parallelization + +Model parameters and inputs have to be replicated across the 8 parallel devices we have. The parameters dictionary is replicated using flax.jax_utils.replicate, which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. + + + Copied +p_params = replicate(params) + + + Copied +prompt_ids = shard(prompt_ids) +prompt_ids.shape + + + Copied +(8, 1, 77) +That shape means that each one of the 8 devices will receive as an input a jnp array with shape (1, 77). 1 is therefore the batch size per device. In TPUs with sufficient memory, it could be larger than 1 if we wanted to generate multiple images (per chip) at once. +We are almost ready to generate images! We just need to create a random number generator to pass to the generation function. This is the standard procedure in Flax, which is very serious and opinionated about random numbers – all functions that deal with random numbers are expected to receive a generator. This ensures reproducibility, even when we are training across multiple distributed devices. +The helper function below uses a seed to initialize a random number generator. As long as we use the same seed, we’ll get the exact same results. Feel free to use different seeds when exploring results later in the notebook. + + + Copied +def create_key(seed=0): + return jax.random.PRNGKey(seed) +We obtain a rng and then “split” it 8 times so each device receives a different generator. Therefore, each device will create a different image, and the full process is reproducible. + + + Copied +rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) +JAX code can be compiled to an efficient representation that runs very fast. However, we need to ensure that all inputs have the same shape in subsequent calls; otherwise, JAX will have to recompile the code, and we wouldn’t be able to take advantage of the optimized speed. +The Flax pipeline can compile the code for us if we pass jit = True as an argument. It will also ensure that the model runs in parallel in the 8 available devices. +The first time we run the following cell it will take a long time to compile, but subequent calls (even with different inputs) will be much faster. For example, it took more than a minute to compile in a TPU v2-8 when I tested, but then it takes about 7s for future inference runs. 
+ + + Copied +%%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + + + Copied +CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +Wall time: 1min 29s +The returned array has shape (8, 1, 512, 512, 3). We reshape it to get rid of the second dimension and obtain 8 images of 512 × 512 × 3 and then convert them to PIL. + + + Copied +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +Visualization + +Let’s create a helper function to display images in a grid. + + + Copied +def image_grid(imgs, rows, cols): + w, h = imgs[0].size + grid = Image.new("RGB", size=(cols * w, rows * h)) + for i, img in enumerate(imgs): + grid.paste(img, box=(i % cols * w, i // cols * h)) + return grid + + + Copied +image_grid(images, 2, 4) + + +Using different prompts + +We don’t have to replicate the same prompt in all the devices. We can do whatever we want: generate 2 prompts 4 times each, or even generate 8 different prompts at once. Let’s do that! +First, we’ll refactor the input preparation code into a handy function: + + + Copied +prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + + + Copied +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +image_grid(images, 2, 4) + + +How does parallelization work? + +We said before that the diffusers Flax pipeline automatically compiles the model and runs it in parallel on all available devices. We’ll now briefly look inside that process to show how it works. +JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program, multiple-data (SPMD) parallelization. It means we’ll run several copies of the same code, each on different data inputs. More sophisticated approaches are possible, we invite you to go over the JAX documentation and the pjit pages to explore this topic if you are interested! +jax.pmap does two things for us: +Compiles (or jits) the code, as if we had invoked jax.jit(). This does not happen when we call pmap, but the first time the pmapped function is invoked. +Ensures the compiled code runs in parallel in all the available devices. +To show how it works we pmap the _generate method of the pipeline, which is the private method that runs generates images. Please, note that this method may be renamed or removed in future releases of diffusers. + + + Copied +p_generate = pmap(pipeline._generate) +After we use pmap, the prepared function p_generate will conceptually do the following: +Invoke a copy of the underlying function pipeline._generate in each device. +Send each device a different portion of the input arguments. That’s what sharding is used for. In our case, prompt_ids has shape (8, 1, 77, 768). This array will be split in 8 and each copy of _generate will receive an input with shape (1, 77, 768). 
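As a minimal, self-contained sketch of this sharding behavior (using a toy function and toy shapes rather than the actual pipeline code), the snippet below shows how shard and pmap cooperate on a host with 8 devices:
 Copied
import jax
import jax.numpy as jnp
from flax.training.common_utils import shard

# A toy per-device function: it only ever sees its own slice of the batch.
def double(x):
    return x * 2.0

p_double = jax.pmap(double)

# A "batch" of 8 items; `shard` reshapes it to (num_devices, items_per_device, ...).
batch = jnp.arange(8 * 4, dtype=jnp.float32).reshape(8, 4)
sharded = shard(batch)

out = p_double(sharded)
print(sharded.shape, out.shape)  # (8, 1, 4) (8, 1, 4) when 8 devices are available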
+We can code _generate completely ignoring the fact that it will be invoked in parallel. We just care about our batch size (1 in this example) and the dimensions that make sense for our code, and don’t have to change anything to make it work in parallel. +The same way as when we used the pipeline call, the first time we run the following cell it will take a while, but then it will be much faster. + + + Copied +%%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() +images.shape + + + Copied +CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +Wall time: 1min 15s + + + Copied +images.shape + + + Copied +(8, 1, 512, 512, 3) +We use block_until_ready() to correctly measure inference time, because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking will occur automatically when you want to use the result of a computation that has not yet been materialized. diff --git a/scrapped_outputs/d3982cad440339992b0b91883f54e551.txt b/scrapped_outputs/d3982cad440339992b0b91883f54e551.txt new file mode 100644 index 0000000000000000000000000000000000000000..09ea06c0ca900208018e1f32ac434cb633735212 --- /dev/null +++ b/scrapped_outputs/d3982cad440339992b0b91883f54e551.txt @@ -0,0 +1,86 @@ +I2VGen-XL I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models by Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. The abstract from the paper is: Video synthesis has recently made remarkable strides benefiting from the rapid development of diffusion models. However, it still encounters challenges in terms of semantic accuracy, clarity and spatio-temporal continuity. They primarily arise from the scarcity of well-aligned text-video data and the complex inherent structure of videos, making it difficult for the model to simultaneously ensure semantic and qualitative excellence. In this report, we propose a cascaded I2VGen-XL approach that enhances model performance by decoupling these two factors and ensures the alignment of the input data by utilizing static images as a form of crucial guidance. I2VGen-XL consists of two stages: i) the base stage guarantees coherent semantics and preserves content from input images by using two hierarchical encoders, and ii) the refinement stage enhances the video’s details by incorporating an additional brief text and improves the resolution to 1280×720. To improve the diversity, we collect around 35 million single-shot text-video pairs and 6 billion text-image pairs to optimize the model. By this means, I2VGen-XL can simultaneously enhance the semantic accuracy, continuity of details and clarity of generated videos. Through extensive experiments, we have investigated the underlying principles of I2VGen-XL and compared it with current top methods, which can demonstrate its effectiveness on diverse data. The source code and models will be publicly available at this https URL. The original codebase can be found here. The model checkpoints can be found here. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Also, to know more about reducing the memory usage of this pipeline, refer to the [“Reduce memory usage”] section here. Sample output with I2VGenXL: library. 
+ Notes I2VGenXL always uses a clip_skip value of 1. This means it leverages the penultimate layer representations from the text encoder of CLIP. It can generate videos of quality that is often on par with Stable Video Diffusion (SVD). Unlike SVD, it additionally accepts text prompts as inputs. It can generate higher resolution videos. When using the DDIMScheduler (which is default for this pipeline), less than 50 steps for inference leads to bad results. I2VGenXLPipeline class diffusers.I2VGenXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer image_encoder: CLIPVisionModelWithProjection feature_extractor: CLIPImageProcessor unet: I2VGenXLUNet scheduler: DDIMScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (I2VGenXLUNet) — +A I2VGenXLUNet to denoise the encoded video latents. scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for image-to-video generation as proposed in I2VGenXL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = 704 width: Optional = 1280 target_fps: Optional = 16 num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 num_videos_per_prompt: Optional = 1 decode_chunk_size: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = 1 ) → pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or images to guide image generation. If you provide a tensor, it needs to be compatible with +CLIPImageProcessor. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. target_fps (int, optional) — +Frames per second. The rate at which the generated images shall be exported to a video after generation. This is also used as a “micro-condition” while generation. num_frames (int, optional) — +The number of video frames to generate. num_inference_steps (int, optional) — +The number of denoising steps. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
eta (float, optional) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. num_videos_per_prompt (int, optional) — +The number of images to generate per prompt. decode_chunk_size (int, optional) — +The number of frames to decode at a time. The higher the chunk size, the higher the temporal consistency +between frames, but also the higher the memory consumption. By default, the decoder will decode all frames at once +for maximal quality. Reduce decode_chunk_size to reduce memory usage. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput or tuple + +If return_dict is True, pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for image-to-video generation with I2VGenXLPipeline. Examples: Copied >>> import torch +>>> from diffusers import I2VGenXLPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> pipeline = I2VGenXLPipeline.from_pretrained("ali-vilab/i2vgen-xl", torch_dtype=torch.float16, variant="fp16") +>>> pipeline.enable_model_cpu_offload() + +>>> image_url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/i2vgen_xl_images/img_0009.png" +>>> image = load_image(image_url).convert("RGB") + +>>> prompt = "Papers were floating in the air on a table in the library" +>>> negative_prompt = "Distorted, discontinuous, Ugly, blurry, low resolution, motionless, static, disfigured, disconnected limbs, Ugly faces, incomplete arms" +>>> generator = torch.manual_seed(8888) + +>>> frames = pipeline( +... prompt=prompt, +... image=image, +... num_inference_steps=50, +... negative_prompt=negative_prompt, +... guidance_scale=9.0, +... generator=generator +... 
).frames[0] +>>> video_path = export_to_gif(frames, "i2v.gif") encode_prompt < source > ( prompt device num_videos_per_prompt negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_videos_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. I2VGenXLPipelineOutput class diffusers.pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]) — +List of video outputs - It can be a nested list of length batch_size, with each sub-list containing denoised Output class for image-to-video pipeline. PIL image sequences of length num_frames. It can also be a NumPy array or Torch tensor of shape +(batch_size, num_frames, channels, height, width) diff --git a/scrapped_outputs/d3ba9aced0a87718cb7a15e2547be883.txt b/scrapped_outputs/d3ba9aced0a87718cb7a15e2547be883.txt new file mode 100644 index 0000000000000000000000000000000000000000..44fefb3c47f353a4d2bfd9d051ae6e0b396bc8d5 --- /dev/null +++ b/scrapped_outputs/d3ba9aced0a87718cb7a15e2547be883.txt @@ -0,0 +1,4 @@ +Using Diffusers for audio + +DanceDiffusionPipeline and AudioDiffusionPipeline can be used to generate +audio rapidly! More coming soon! diff --git a/scrapped_outputs/d3cdf0ec71daaa1c67973713e8982e9b.txt b/scrapped_outputs/d3cdf0ec71daaa1c67973713e8982e9b.txt new file mode 100644 index 0000000000000000000000000000000000000000..41f10cdbedfa38cfad6710fe9073c402f251dcf7 --- /dev/null +++ b/scrapped_outputs/d3cdf0ec71daaa1c67973713e8982e9b.txt @@ -0,0 +1,5 @@ +Philosophy + +Readability and clarity are preferred over highly optimized code. A strong importance is put on providing readable, intuitive and elementary code design. E.g., the provided schedulers are separated from the provided models and use well-commented code that can be read alongside the original paper. +Diffusers is modality independent and focuses on providing pretrained models and tools to build systems that generate continuous outputs, e.g. vision and audio. This is one of the guiding goals even if the initial pipelines are devoted to vision tasks. +Diffusion models and schedulers are provided as concise, elementary building blocks. 
In contrast, diffusion pipelines are a collection of end-to-end diffusion systems that can be used out-of-the-box, should stay as close as possible to their original implementations and can include components of other libraries, such as text encoders. Examples of diffusion pipelines are Glide, Latent Diffusion and Stable Diffusion. diff --git a/scrapped_outputs/d3de3356e08a77edcba45908a1ed43cc.txt b/scrapped_outputs/d3de3356e08a77edcba45908a1ed43cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d3e9244c2df24d7ede9a87dac840c911.txt b/scrapped_outputs/d3e9244c2df24d7ede9a87dac840c911.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9d5759d2a52433aeb4a07b9b2cace405fc5aff7 --- /dev/null +++ b/scrapped_outputs/d3e9244c2df24d7ede9a87dac840c911.txt @@ -0,0 +1,61 @@ +Distilled Stable Diffusion inference Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a distilled version of the Stable Diffusion model from Nota AI. The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. Read this blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. Let’s load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: Copied from diffusers import StableDiffusionPipeline +import torch + +distilled = StableDiffusionPipeline.from_pretrained( + "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") + +original = StableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Given a prompt, get the inference time for the original model: Copied import time + +seed = 2023 +generator = torch.manual_seed(seed) + +NUM_ITERS_TO_RUN = 3 +NUM_INFERENCE_STEPS = 25 +NUM_IMAGES_PER_PROMPT = 4 + +prompt = "a golden vase with different flowers" + +start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = original( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() +original_sd = f"{(end - start) / 1e6:.1f}" + +print(f"Execution time -- {original_sd} ms\n") +"Execution time -- 45781.5 ms" Time the distilled model inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_sd} ms\n") +"Execution time -- 29884.2 ms" original Stable Diffusion (45781.5 ms) distilled Stable Diffusion (29884.2 ms) Tiny AutoEncoder To speed inference up even more, use a tiny distilled version of the Stable Diffusion VAE to denoise the latents into images. 
Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: Copied from diffusers import AutoencoderTiny + +distilled.vae = AutoencoderTiny.from_pretrained( + "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, +).to("cuda") Time the distilled model and distilled VAE inference: Copied start = time.time_ns() +for _ in range(NUM_ITERS_TO_RUN): + images = distilled( + prompt, + num_inference_steps=NUM_INFERENCE_STEPS, + generator=generator, + num_images_per_prompt=NUM_IMAGES_PER_PROMPT + ).images +end = time.time_ns() + +distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" +print(f"Execution time -- {distilled_tiny_sd} ms\n") +"Execution time -- 27165.7 ms" distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms) diff --git a/scrapped_outputs/d3e9eb6c1c26f3b636ffe51c3bc89979.txt b/scrapped_outputs/d3e9eb6c1c26f3b636ffe51c3bc89979.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/d3e9eb6c1c26f3b636ffe51c3bc89979.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because its only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script relevant to the T2I-Adapter. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model.
Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteSchedulerTest +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteSchedulerTest.from_config(pipe.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/d4377b7bf834419c415d5ecd0ef01bf0.txt b/scrapped_outputs/d4377b7bf834419c415d5ecd0ef01bf0.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6426f311f1dcd39145426a6e04473141bc5c4c0 --- /dev/null +++ b/scrapped_outputs/d4377b7bf834419c415d5ecd0ef01bf0.txt @@ -0,0 +1,157 @@ +Stable diffusion 2 + +Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of Stable Diffusion 1. +The project to train Stable Diffusion 2 was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. +The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. 
+These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. +For more details about how Stable Diffusion 2 works and how it differs from Stable Diffusion 1, please refer to the official launch announcement post. + +Tips + + +Available checkpoints: + +Note that the architecture is more or less identical to Stable Diffusion 1, so please refer to this page for API documentation. +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline +Super-Resolution (x4 resolution): stabilityai/stable-diffusion-x4-upscaler with StableDiffusionUpscalePipeline +Depth-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-depth with StableDiffusionDepth2ImagePipeline +We recommend using the DPMSolverMultistepScheduler as it is currently the fastest scheduler available. + +Text-to-Image + +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image.save("astronaut.png") +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0] +image.save("astronaut.png") + +Image Inpainting + +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline + + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] + +image.save("yellow_cat.png")
+ +Super-Resolution + +Image Upscaling (x4 resolution): stabilityai/stable-diffusion-x4-upscaler with StableDiffusionUpscalePipeline + + + Copied +import requests +from PIL import Image +from io import BytesIO +from diffusers import StableDiffusionUpscalePipeline +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +response = requests.get(url) +low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +upscaled_image.save("upsampled_cat.png") + +Depth-to-Image + +Depth-Guided Text-to-Image: stabilityai/stable-diffusion-2-depth with StableDiffusionDepth2ImagePipeline + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] + +How to load and use different schedulers + +The Stable Diffusion pipeline uses the DDIMScheduler by default, but Diffusers provides many other schedulers that can be used with it, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, and EulerAncestralDiscreteScheduler. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=euler_scheduler) diff --git a/scrapped_outputs/d43cafaeac614e8b36509c417acb7f81.txt b/scrapped_outputs/d43cafaeac614e8b36509c417acb7f81.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d44d771ea39f1a89663c2a760b448a6f.txt b/scrapped_outputs/d44d771ea39f1a89663c2a760b448a6f.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/d44d771ea39f1a89663c2a760b448a6f.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks.
It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
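For orientation, here is a minimal sketch of how this scheduler is typically driven end-to-end through RePaintPipeline, which forwards jump_length, jump_n_sample, and eta to set_timesteps() and step(). The checkpoint name and image URLs are illustrative assumptions (any unconditional DDPM prior at the matching resolution should behave similarly) and are not part of the API reference above: Copied
import torch
from diffusers import RePaintPipeline, RePaintScheduler
from diffusers.utils import load_image

# Illustrative inputs: an image to inpaint and a binary mask (0.0 marks the regions to inpaint)
original_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png")
mask_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png")

# A pretrained unconditional DDPM serves as the generative prior
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,  # RePaint benefits from many denoising steps
    eta=0.0,                  # DDIM-like (low-noise) setting, see the eta parameter above
    jump_length=10,           # forwarded to RePaintScheduler.set_timesteps()
    jump_n_sample=10,
    generator=generator,
)
inpainted_image = output.images[0]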
diff --git a/scrapped_outputs/d462cb93853439b7e97b2c4dffd59461.txt b/scrapped_outputs/d462cb93853439b7e97b2c4dffd59461.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d467b0cb7ebe3fbe132fa5919ef66c3e.txt b/scrapped_outputs/d467b0cb7ebe3fbe132fa5919ef66c3e.txt new file mode 100644 index 0000000000000000000000000000000000000000..f30b39a298e4c56dee2c29827af6d01fc3c8586a --- /dev/null +++ b/scrapped_outputs/d467b0cb7ebe3fbe132fa5919ef66c3e.txt @@ -0,0 +1,36 @@ +AsymmetricAutoencoderKL Improved larger variational autoencoder (VAE) model with KL loss for inpainting task: Designing a Better Asymmetric VQGAN for StableDiffusion by Zixin Zhu, Xuelu Feng, Dongdong Chen, Jianmin Bao, Le Wang, Yinpeng Chen, Lu Yuan, Gang Hua. The abstract from the paper is: StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN Evaluation results can be found in section 4.1 of the original paper. 
Available checkpoints https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5 https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2 Example Usage Copied from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline +from diffusers.utils import load_image, make_image_grid + + +prompt = "a photo of a person with beard" +img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png" +mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png" + +original_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") +pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5") +pipe.to("cuda") + +image = pipe(prompt=prompt, image=original_image, mask_image=mask_image).images[0] +make_image_grid([original_image, mask_image, image], rows=1, cols=3) AsymmetricAutoencoderKL class diffusers.AsymmetricAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) down_block_out_channels: Tuple = (64,) layers_per_down_block: int = 1 up_block_types: Tuple = ('UpDecoderBlock2D',) up_block_out_channels: Tuple = (64,) layers_per_up_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. down_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of down block output channels. layers_per_down_block (int, optional, defaults to 1) — +Number layers for down block. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. up_block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of up block output channels. layers_per_up_block (int, optional, defaults to 1) — +Number layers for up block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. norm_num_groups (int, optional, defaults to 32) — +Number of groups to use for the first normalization layer in ResNet blocks. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. Designing a Better Asymmetric VQGAN for StableDiffusion https://arxiv.org/abs/2306.04632 . A VAE model with KL loss +for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. 
Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor mask: Optional = None sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. mask (torch.FloatTensor, optional, defaults to None) — Optional inpainting mask. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. diff --git a/scrapped_outputs/d46a9f1695b53dccd359224793161e56.txt b/scrapped_outputs/d46a9f1695b53dccd359224793161e56.txt new file mode 100644 index 0000000000000000000000000000000000000000..0051dea3c8497a0aea4368d8c2019c00ab6ab808 --- /dev/null +++ b/scrapped_outputs/d46a9f1695b53dccd359224793161e56.txt @@ -0,0 +1,107 @@ +Semantic Guidance Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Text-to-Image Models using Semantic Guidance and provides strong semantic control over image generation. +Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, while staying true to the original image composition. The abstract from the paper is: Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) generalizes to any generative architecture using classifier-free guidance. More importantly, it allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on both latent and pixel-based diffusion models such as Stable Diffusion, Paella, and DeepFloyd-IF using a variety of tasks, thus providing strong evidence for its versatility, flexibility, and improvements over existing methods. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
SemanticStableDiffusionPipeline class diffusers.SemanticStableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass +documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular +device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 editing_prompt: Union = None editing_prompt_embeddings: Optional = None reverse_editing_direction: Union = False edit_guidance_scale: Union = 5 edit_warmup_steps: Union = 10 edit_cooldown_steps: Union = None edit_threshold: Union = 0.9 edit_momentum_scale: Optional = 0.1 edit_mom_beta: Optional = 0.4 edit_weights: Optional = None sem_guidance: Optional = None ) → ~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. editing_prompt (str or List[str], optional) — +The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. editing_prompt_embeddings (torch.Tensor, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as a list, values should correspond to +editing_prompt. edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is +calculated for those steps and applied once all warmup periods are over. edit_cooldown_steps (float or List[float], optional, defaults to None) — +Number of diffusion steps (for each prompt) after which semantic guidance is longer applied. edit_threshold (float or List[float], optional, defaults to 0.9) — +Threshold of semantic guidance. edit_momentum_scale (float, optional, defaults to 0.1) — +Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than +sld_warmup_steps). Momentum is only added to latent guidance once all warmup periods are finished. edit_mom_beta (float, optional, defaults to 0.4) — +Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous +momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than +edit_warmup_steps). edit_weights (List[float], optional, defaults to None) — +Indicates how much each individual concept should influence the overall guidance. If no weights are +provided all concepts are applied equally. sem_guidance (List[torch.Tensor], optional) — +List of pre-generated guidance vectors to be applied at generation. Length of the list has to +correspond to num_inference_steps. 
Returns +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput or tuple + +If return_dict is True, +~pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” +(nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import SemanticStableDiffusionPipeline + +>>> pipe = SemanticStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> out = pipe( +... prompt="a photo of the face of a woman", +... num_images_per_prompt=1, +... guidance_scale=7, +... editing_prompt=[ +... "smiling, smile", # Concepts to apply +... "glasses, wearing glasses", +... "curls, wavy hair, curly hair", +... "beard, full beard, mustache", +... ], +... reverse_editing_direction=[ +... False, +... False, +... False, +... False, +... ], # Direction of guidance i.e. increase all concepts +... edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept +... edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept +... edit_threshold=[ +... 0.99, +... 0.975, +... 0.925, +... 0.96, +... ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions +... edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance +... edit_mom_beta=0.6, # Momentum beta +... edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +... ) +>>> image = out.images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/d483fe4380eb1ac76487b55c838670a5.txt b/scrapped_outputs/d483fe4380eb1ac76487b55c838670a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca254f42f72a76d580bb5340e193834f7f82b6d6 --- /dev/null +++ b/scrapped_outputs/d483fe4380eb1ac76487b55c838670a5.txt @@ -0,0 +1,86 @@ +Prompt weighting Prompt weighting provides a way to emphasize or de-emphasize certain parts of a prompt, allowing for more control over the generated image. A prompt can include several concepts, which gets turned into contextualized text embeddings. The embeddings are used by the model to condition its cross-attention layers to generate an image (read the Stable Diffusion blog post to learn more about how it works). Prompt weighting works by increasing or decreasing the scale of the text embedding vector that corresponds to its concept in the prompt because you may not necessarily want the model to focus on all concepts equally. The easiest way to prepare the prompt-weighted embeddings is to use Compel, a text prompt-weighting and blending library. 
Once you have the prompt-weighted embeddings, you can pass them to any pipeline that has a prompt_embeds (and optionally negative_prompt_embeds) parameter, such as StableDiffusionPipeline, StableDiffusionControlNetPipeline, and StableDiffusionXLPipeline. If your favorite pipeline doesn’t have a prompt_embeds parameter, please open an issue so we can add it! This guide will show you how to weight and blend your prompts with Compel in 🤗 Diffusers. Before you begin, make sure you have the latest version of Compel installed: Copied # uncomment to install in Colab +#!pip install compel --upgrade For this guide, let’s generate an image with the prompt "a red cat playing with a ball" using the StableDiffusionPipeline: Copied from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler +import torch + +pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", use_safetensors=True) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.to("cuda") + +prompt = "a red cat playing with a ball" + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image Weighting You’ll notice there is no “ball” in the image! Let’s use compel to upweight the concept of “ball” in the prompt. Create a Compel object, and pass it a tokenizer and text encoder: Copied from compel import Compel + +compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) compel uses + or - to increase or decrease the weight of a word in the prompt. To increase the weight of “ball”: + corresponds to the value 1.1, ++ corresponds to 1.1^2, and so on. Similarly, - corresponds to 0.9 and -- corresponds to 0.9^2. Feel free to experiment with adding more + or - in your prompt! Copied prompt = "a red cat playing with a ball++" Pass the prompt to compel_proc to create the new prompt embeddings which are passed to the pipeline: Copied prompt_embeds = compel_proc(prompt) +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image To downweight parts of the prompt, use the - suffix: Copied prompt = "a red------- cat playing with a ball" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image You can even up or downweight multiple concepts in the same prompt: Copied prompt = "a red cat++ playing with a ball----" +prompt_embeds = compel_proc(prompt) + +generator = torch.manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Blending You can also create a weighted blend of prompts by adding .blend() to a list of prompts and passing it some weights. Your blend may not always produce the result you expect because it breaks some assumptions about how the text encoder functions, so just have fun and experiment with it! Copied prompt_embeds = compel_proc('("a red cat playing with a ball", "jungle").blend(0.7, 0.8)') +generator = torch.Generator(device="cuda").manual_seed(33) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Conjunction A conjunction diffuses each prompt independently and concatenates their results by their weighted sum. 
Add .and() to the end of a list of prompts to create a conjunction: Copied prompt_embeds = compel_proc('["a red cat", "playing with a", "ball"].and()') +generator = torch.Generator(device="cuda").manual_seed(55) + +image = pipe(prompt_embeds=prompt_embeds, generator=generator, num_inference_steps=20).images[0] +image Textual inversion Textual inversion is a technique for learning a specific concept from some images which you can use to generate new images conditioned on that concept. Create a pipeline and use the load_textual_inversion() function to load the textual inversion embeddings (feel free to browse the Stable Diffusion Conceptualizer for 100+ trained concepts): Copied import torch +from diffusers import StableDiffusionPipeline +from compel import Compel, DiffusersTextualInversionManager + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, + use_safetensors=True, variant="fp16").to("cuda") +pipe.load_textual_inversion("sd-concepts-library/midjourney-style") Compel provides a DiffusersTextualInversionManager class to simplify prompt weighting with textual inversion. Instantiate DiffusersTextualInversionManager and pass it to the Compel class: Copied textual_inversion_manager = DiffusersTextualInversionManager(pipe) +compel_proc = Compel( + tokenizer=pipe.tokenizer, + text_encoder=pipe.text_encoder, + textual_inversion_manager=textual_inversion_manager) Incorporate the concept to condition a prompt with using the syntax: Copied prompt_embeds = compel_proc('("A red cat++ playing with a ball ")') + +image = pipe(prompt_embeds=prompt_embeds).images[0] +image DreamBooth DreamBooth is a technique for generating contextualized images of a subject given just a few images of the subject to train on. It is similar to textual inversion, but DreamBooth trains the full model whereas textual inversion only fine-tunes the text embeddings. This means you should use from_pretrained() to load the DreamBooth model (feel free to browse the Stable Diffusion Dreambooth Concepts Library for 100+ trained models): Copied import torch +from diffusers import DiffusionPipeline, UniPCMultistepScheduler +from compel import Compel + +pipe = DiffusionPipeline.from_pretrained("sd-dreambooth-library/dndcoverart-v1", torch_dtype=torch.float16).to("cuda") +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) Create a Compel class with a tokenizer and text encoder, and pass your prompt to it. Depending on the model you use, you’ll need to incorporate the model’s unique identifier into your prompt. For example, the dndcoverart-v1 model uses the identifier dndcoverart: Copied compel_proc = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder) +prompt_embeds = compel_proc('("magazine cover of a dndcoverart dragon, high quality, intricate details, larry elmore art style").and()') +image = pipe(prompt_embeds=prompt_embeds).images[0] +image Stable Diffusion XL Stable Diffusion XL (SDXL) has two tokenizers and text encoders so it’s usage is a bit different. 
To address this, you should pass both tokenizers and encoders to the Compel class: Copied from compel import Compel, ReturnedEmbeddingsType +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + use_safetensors=True, + torch_dtype=torch.float16 +).to("cuda") + +compel = Compel( + tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2] , + text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2], + returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, + requires_pooled=[False, True] +) This time, let’s upweight “ball” by a factor of 1.5 for the first prompt, and downweight “ball” by 0.6 for the second prompt. The StableDiffusionXLPipeline also requires pooled_prompt_embeds (and optionally negative_pooled_prompt_embeds) so you should pass those to the pipeline along with the conditioning tensors: Copied # apply weights +prompt = ["a red cat playing with a (ball)1.5", "a red cat playing with a (ball)0.6"] +conditioning, pooled = compel(prompt) + +# generate image +generator = [torch.Generator().manual_seed(33) for _ in range(len(prompt))] +images = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, generator=generator, num_inference_steps=30).images +make_image_grid(images, rows=1, cols=2) "a red cat playing with a (ball)1.5" "a red cat playing with a (ball)0.6" diff --git a/scrapped_outputs/d4c20776caa0b0e9f050e622bf7cd5a2.txt b/scrapped_outputs/d4c20776caa0b0e9f050e622bf7cd5a2.txt new file mode 100644 index 0000000000000000000000000000000000000000..bdb12cb9f8ec935ec9417d06fc21a1176b44b6b4 --- /dev/null +++ b/scrapped_outputs/d4c20776caa0b0e9f050e622bf7cd5a2.txt @@ -0,0 +1,248 @@ +Load adapters There are several training techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different. This guide will show you how to load DreamBooth, textual inversion, and LoRA weights. Feel free to browse the Stable Diffusion Conceptualizer, LoRA the Explorer, and the Diffusers Models Gallery for checkpoints and embeddings to use. DreamBooth DreamBooth finetunes an entire diffusion model on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model. Let’s load the herge_style checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. 
For it to work, you need to include the special word herge_style in your prompt to trigger the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("sd-dreambooth-library/herge-style", torch_dtype=torch.float16).to("cuda") +prompt = "A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image Textual inversion Textual inversion is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file. Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now you can load the textual inversion embeddings with the load_textual_inversion() method and generate some images. Let’s load the sd-concepts-library/gta5-artwork embeddings and you’ll need to include the special word in your prompt to trigger it: Copied pipeline.load_textual_inversion("sd-concepts-library/gta5-artwork") +prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, style" +image = pipeline(prompt).images[0] +image Textual inversion can also be trained on undesirable things to create negative embeddings to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with load_textual_inversion(), but this time, you’ll need two more parameters: weight_name: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format token: specifies the special word to use in the prompt to trigger the embeddings Let’s load the sayakpaul/EasyNegative-test embeddings: Copied pipeline.load_textual_inversion( + "sayakpaul/EasyNegative-test", weight_name="EasyNegative.safetensors", token="EasyNegative" +) Now you can use the token to generate an image with the negative embeddings: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative" +negative_prompt = "EasyNegative" + +image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=50).images[0] +image LoRA Low-Rank Adaptation (LoRA) is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store. LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA. 
LoRAs also need to be used with another model: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") Then use the load_lora_weights() method to load the ostris/super-cereal-sdxl-lora weights and specify the weights filename from the repository: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors") +prompt = "bears, pizza bites" +image = pipeline(prompt).images[0] +image The load_lora_weights() method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where: the LoRA weights don’t have separate identifiers for the UNet and text encoder the LoRA weights have separate identifiers for the UNet and text encoder But if you only need to load LoRA weights into the UNet, then you can use the load_attn_procs() method. Let’s load the jbilcke-hf/sdxl-cinematic-1 LoRA: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.unet.load_attn_procs("jbilcke-hf/sdxl-cinematic-1", weight_name="pytorch_lora_weights.safetensors") + +# use cnmt in the prompt to trigger the LoRA +prompt = "A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image For both load_lora_weights() and load_attn_procs(), you can pass the cross_attention_kwargs={"scale": 0.5} parameter to adjust how much of the LoRA weights to use. A value of 0 is the same as only using the base model weights, and a value of 1 is equivalent to using the fully finetuned LoRA. To unload the LoRA weights, use the unload_lora_weights() method to discard the LoRA weights and restore the model to its original weights: Copied pipeline.unload_lora_weights() Load multiple LoRAs It can be fun to use multiple LoRAs together to create something entirely new and unique. The fuse_lora() method allows you to fuse the LoRA weights with the original weights of the underlying model. Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with save_pretrained() to avoid loading and fusing the weights every time you want to use the model. Load an initial model: Copied from diffusers import StableDiffusionXLPipeline, AutoencoderKL +import torch + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + vae=vae, + torch_dtype=torch.float16, +).to("cuda") Next, load the LoRA checkpoint and fuse it with the original weights. The lora_scale parameter controls how much to scale the output by with the LoRA weights. It is important to make the lora_scale adjustments in the fuse_lora() method because it won’t work if you try to pass scale to the cross_attention_kwargs in the pipeline. If you need to reset the original model weights for any reason (use a different lora_scale), you should use the unfuse_lora() method. 
Copied pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl") +pipeline.fuse_lora(lora_scale=0.7) + +# to unfuse the LoRA weights +pipeline.unfuse_lora() Then fuse this pipeline with the next set of LoRA weights: Copied pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora") +pipeline.fuse_lora(lora_scale=0.7) You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it. Now you can generate an image that uses the weights from both LoRAs: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt).images[0] +image 🤗 PEFT Read the Inference with 🤗 PEFT tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section. Another way you can load and use multiple LoRAs is to specify the adapter_name parameter in load_lora_weights(). This method takes advantage of the 🤗 PEFT integration. For example, load and name both LoRA weights: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("ostris/ikea-instructions-lora-sdxl", weight_name="ikea_instructions_xl_v1_5.safetensors", adapter_name="ikea") +pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora", weight_name="cereal_box_sdxl_v1.safetensors", adapter_name="cereal") Now use the set_adapters() to activate both LoRAs, and you can configure how much weight each LoRA should have on the output: Copied pipeline.set_adapters(["ikea", "cereal"], adapter_weights=[0.7, 0.5]) Then, generate an image: Copied prompt = "A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration" +image = pipeline(prompt, num_inference_steps=30, cross_attention_kwargs={"scale": 1.0}).images[0] +image Kohya and TheLastBen Other popular LoRA trainers from the community include those by Kohya and TheLastBen. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way. Let’s download the Blueprintify SD XL 1.0 checkpoint from Civitai: Copied !wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors Load the LoRA checkpoint with the load_lora_weights() method, and specify the filename in the weight_name parameter: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/weights", weight_name="blueprintify-sd-xl-10.safetensors") Generate an image: Copied # use bl3uprint in the prompt to trigger the LoRA +prompt = "bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop" +image = pipeline(prompt).images[0] +image Some limitations of using Kohya LoRAs with 🤗 Diffusers include: Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained here. LyCORIS checkpoints aren’t fully supported. The load_lora_weights() method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported. Loading a checkpoint from TheLastBen is very similar. 
For example, to load the TheLastBen/William_Eggleston_Style_SDXL checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("TheLastBen/William_Eggleston_Style_SDXL", weight_name="wegg.safetensors") + +# use by william eggleston in the prompt to trigger the LoRA +prompt = "a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful" +image = pipeline(prompt=prompt).images[0] +image IP-Adapter IP-Adapter is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs. IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, AnimateDiff. And you can use any custom models finetuned from the same base models. It also works with LCM-Lora out of box. You can find official IP-Adapter checkpoints in h94/IP-Adapter. IP-Adapter was contributed by okotaku. Let’s first create a Stable Diffusion Pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch +from diffusers.utils import load_image + + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") Now load the h94/IP-Adapter weights with the load_ip_adapter() method. Copied pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") IP-Adapter relies on an image encoder to generate the image features, if your IP-Adapter weights folder contains a "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise you can so load a [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/clip#transformers.CLIPVisionModelWithProjection) model and pass it to a Stable Diffusion pipeline when you create it. + + Copied from diffusers import AutoPipelineForText2Image +from transformers import CLIPVisionModelWithProjection +import torch + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +).to("cuda") + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", image_encoder=image_encoder, torch_dtype=torch.float16).to("cuda") IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the Textual Inversion section as the image prompt (ip_adapter_image) along with a text prompt to add “sunglasses”. 
😎 Copied pipeline.set_ip_adapter_scale(0.6) +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality, wearing sunglasses', +    ip_adapter_image=image, +    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", +    num_inference_steps=50, +    generator=generator, +).images +images[0]     You can use the set_ip_adapter_scale() method to adjust the text prompt and image prompt condition ratio.  If you’re only using the image prompt, you should set the scale to 1.0. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. +scale=0.5 can achieve good results in most cases when you use both text and image prompts. IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See below examples of how you can use it with Image-to-Image and Inpaint. image-to-image inpaint Copied from diffusers import AutoPipelineForImage2Image +import torch +from diffusers.utils import load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg") +ip_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( +    prompt='best quality, high quality', +    image = image, +    ip_adapter_image=ip_image, +    num_inference_steps=50, +    generator=generator, +    strength=0.6, +).images +images[0] IP-Adapters can also be used with SDXL Copied from diffusers import AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16 +).to("cuda") + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +image = pipeline( + prompt="best quality, high quality", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=25, + generator=generator, +).images[0] +image.save("sdxl_t2i.png") input image adapted image You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. +Weights are loaded with the same method used for the other IP-Adapters. Copied # Load ip-adapter-full-face_sd15.bin +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") It is recommended to use DDIMScheduler and EulerDiscreteScheduler for face model. 
Copied import torch +from diffusers import StableDiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image + +pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, +).to("cuda") +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-full-face_sd15.bin") + +pipeline.set_ip_adapter_scale(0.7) + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png") + +generator = torch.Generator(device="cpu").manual_seed(33) + +image = pipeline( + prompt="A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower", + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, width=512, height=704, + generator=generator, +).images[0] input image output image You can load multiple IP-Adapter models and use multiple reference images at the same time. In this example we use IP-Adapter-Plus face model to create a consistent character and also use IP-Adapter-Plus model along with 10 images to create a coherent style in the image we generate. Copied import torch +from diffusers import AutoPipelineForText2Image, DDIMScheduler +from transformers import CLIPVisionModelWithProjection +from diffusers.utils import load_image + +image_encoder = CLIPVisionModelWithProjection.from_pretrained( + "h94/IP-Adapter", + subfolder="models/image_encoder", + torch_dtype=torch.float16, +) + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + torch_dtype=torch.float16, + image_encoder=image_encoder, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.load_ip_adapter( + "h94/IP-Adapter", + subfolder="sdxl_models", + weight_name=["ip-adapter-plus_sdxl_vit-h.safetensors", "ip-adapter-plus-face_sdxl_vit-h.safetensors"] +) +pipeline.set_ip_adapter_scale([0.7, 0.3]) +pipeline.enable_model_cpu_offload() + +face_image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png") +style_folder = "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy" +style_images = [load_image(f"{style_folder}/img{i}.png") for i in range(10)] + +generator = torch.Generator(device="cpu").manual_seed(0) + +image = pipeline( + prompt="wonderwoman", + ip_adapter_image=[style_images, face_image], + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, num_images_per_prompt=1, + generator=generator, +).images[0]     style input image face input image output image LCM-Lora You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights. 
Copied from diffusers import DiffusionPipeline, LCMScheduler +import torch +from diffusers.utils import load_image + +model_id = "sd-dreambooth-library/herge-style" +lcm_lora_id = "latent-consistency/lcm-lora-sdv1-5" + +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) + +pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") +pipe.load_lora_weights(lcm_lora_id) +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "best quality, high quality" +image = load_image("https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png") +images = pipe( + prompt=prompt, + ip_adapter_image=image, + num_inference_steps=4, + guidance_scale=1, +).images[0] Other pipelines IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run load_ip_adapter() method after you create the pipeline, and then pass your image to the pipeline as ip_adapter_image 🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a feature request if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet! You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff. ControlNet AnimateDiff Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +import torch +from diffusers.utils import load_image + +controlnet_model_path = "lllyasviel/control_v11f1p_sd15_depth" +controlnet = ControlNetModel.from_pretrained(controlnet_model_path, torch_dtype=torch.float16) + +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16) +pipeline.to("cuda") + +image = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png") +depth_map = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png") + +pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin") + +generator = torch.Generator(device="cpu").manual_seed(33) +images = pipeline( + prompt='best quality, high quality', + image=depth_map, + ip_adapter_image=image, + negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality", + num_inference_steps=50, + generator=generator, +).images +images[0] input image adapted image diff --git a/scrapped_outputs/d4d22c7bf23bb0090c3f50a90da581b9.txt b/scrapped_outputs/d4d22c7bf23bb0090c3f50a90da581b9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d5117cec38f3415038c7a4a0716d351a.txt b/scrapped_outputs/d5117cec38f3415038c7a4a0716d351a.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/d5117cec38f3415038c7a4a0716d351a.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. 
During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. 
Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. 
diff --git a/scrapped_outputs/d577b64118f7c1d7b5c7deea6776c404.txt b/scrapped_outputs/d577b64118f7c1d7b5c7deea6776c404.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d579dc68fae924a2e5cecc0f4b4c89b5.txt b/scrapped_outputs/d579dc68fae924a2e5cecc0f4b4c89b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/d579dc68fae924a2e5cecc0f4b4c89b5.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! 
Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! 
Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. diff --git a/scrapped_outputs/d59d581299f0898db8abd6fb3f058a38.txt b/scrapped_outputs/d59d581299f0898db8abd6fb3f058a38.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d5c99cd7f801f157d5aebafbf381859b.txt b/scrapped_outputs/d5c99cd7f801f157d5aebafbf381859b.txt new file mode 100644 index 0000000000000000000000000000000000000000..6239505b8ff5f3f7eb6043b475677f1d948af531 --- /dev/null +++ b/scrapped_outputs/d5c99cd7f801f157d5aebafbf381859b.txt @@ -0,0 +1,38 @@ +Pipeline callbacks The denoising loop of a pipeline can be modified with custom defined functions using the callback_on_step_end parameter. This can be really useful for dynamically adjusting certain pipeline attributes, or modifying tensor variables. The flexibility of callbacks opens up some interesting use-cases such as changing the prompt embeddings at each timestep, assigning different weights to the prompt embeddings, and editing the guidance scale. This guide will show you how to use the callback_on_step_end parameter to disable classifier-free guidance (CFG) after 40% of the inference steps to save compute with minimal cost to performance. The callback function should have the following arguments: pipe (or the pipeline instance) provides access to useful properties such as num_timestep and guidance_scale. You can modify these properties by updating the underlying attributes. For this example, you’ll disable CFG by setting pipe._guidance_scale=0.0. step_index and timestep tell you where you are in the denoising loop. Use step_index to turn off CFG after reaching 40% of num_timestep. callback_kwargs is a dict that contains tensor variables you can modify during the denoising loop. It only includes variables specified in the callback_on_step_end_tensor_inputs argument, which is passed to the pipeline’s __call__ method. Different pipelines may use different sets of variables, so please check a pipeline’s _callback_tensor_inputs attribute for the list of variables you can modify. Some common variables include latents and prompt_embeds. For this function, change the batch size of prompt_embeds after setting guidance_scale=0.0 in order for it to work properly. 
Your callback function should look something like this: Copied def callback_dynamic_cfg(pipe, step_index, timestep, callback_kwargs): + # adjust the batch_size of prompt_embeds according to guidance_scale + if step_index == int(pipe.num_timestep * 0.4): + prompt_embeds = callback_kwargs["prompt_embeds"] + prompt_embeds = prompt_embeds.chunk(2)[-1] + + # update guidance_scale and prompt_embeds + pipe._guidance_scale = 0.0 + callback_kwargs["prompt_embeds"] = prompt_embeds + return callback_kwargs Now you can pass the callback function to the callback_on_step_end parameter and prompt_embeds to callback_on_step_end_tensor_inputs. Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" + +generator = torch.Generator(device="cuda").manual_seed(1) +out = pipe(prompt, generator=generator, callback_on_step_end=callback_dynamic_cfg, callback_on_step_end_tensor_inputs=['prompt_embeds']) + +out.images[0].save("out_custom_cfg.png") The callback function is executed at the end of each denoising step and modifies the pipeline attributes and tensor variables for the next denoising step. With callbacks, you can implement features such as dynamic CFG without having to modify the underlying code at all! 🤗 Diffusers currently only supports callback_on_step_end, but feel free to open a feature request if you have a cool use case and require a callback function with a different execution point! Interrupt the diffusion process Interrupting the diffusion process is particularly useful when building UIs that work with Diffusers because it allows users to stop the generation process if they're unhappy with the intermediate results. You can incorporate this into your pipeline with a callback. The interruption callback is supported for text-to-image, image-to-image, and inpainting in the StableDiffusionPipeline and StableDiffusionXLPipeline. The callback function should take the following arguments: pipe, i, t, and callback_kwargs (this must be returned). Set the pipeline's _interrupt attribute to True to stop the diffusion process after a certain number of steps. You are also free to implement your own custom stopping logic inside the callback. In this example, the diffusion process is stopped after 10 steps even though num_inference_steps is set to 50.
Copied from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe.enable_model_cpu_offload() +num_inference_steps = 50 + +def interrupt_callback(pipe, i, t, callback_kwargs): + stop_idx = 10 + if i == stop_idx: + pipe._interrupt = True + + return callback_kwargs + +pipe( + "A photo of a cat", + num_inference_steps=num_inference_steps, + callback_on_step_end=interrupt_callback, +) diff --git a/scrapped_outputs/d5d0f69639c07496a0dbd0302ef01710.txt b/scrapped_outputs/d5d0f69639c07496a0dbd0302ef01710.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d5d99e6fff512105e97884d85d3dd409.txt b/scrapped_outputs/d5d99e6fff512105e97884d85d3dd409.txt new file mode 100644 index 0000000000000000000000000000000000000000..707a06e6336d2883e0c81a8c8cc00f306f544615 --- /dev/null +++ b/scrapped_outputs/d5d99e6fff512105e97884d85d3dd409.txt @@ -0,0 +1,65 @@ +Unconditional image generation Unconditional image generation models are not conditioned on text or images during training. It only generates images that resemble its training data distribution. This guide will explore the train_unconditional.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies: Copied cd examples/unconditional_image_generation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_unconditional.py \ + --mixed_precision="bf16" Some basic and important parameters to specify include: --dataset_name: the name of the dataset on the Hub or a local path to the dataset to train on --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command Bring your dataset, and let the training script handle everything else! Training script The code for preprocessing the dataset and the training loop is found in the main() function. If you need to adapt the training script, this is where you’ll need to make your changes. The train_unconditional script initializes a UNet2DModel if you don’t provide a model configuration. You can configure the UNet here if you’d like: Copied model = UNet2DModel( + sample_size=args.resolution, + in_channels=3, + out_channels=3, + layers_per_block=2, + block_out_channels=(128, 128, 256, 256, 512, 512), + down_block_types=( + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "DownBlock2D", + "AttnDownBlock2D", + "DownBlock2D", + ), + up_block_types=( + "UpBlock2D", + "AttnUpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + "UpBlock2D", + ), +) Next, the script initializes a scheduler and optimizer: Copied # Initialize the scheduler +accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) +if accepts_prediction_type: + noise_scheduler = DDPMScheduler( + num_train_timesteps=args.ddpm_num_steps, + beta_schedule=args.ddpm_beta_schedule, + prediction_type=args.prediction_type, + ) +else: + noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) + +# Initialize the optimizer +optimizer = torch.optim.AdamW( + model.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Then it loads a dataset and you can specify how to preprocess it: Copied dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") + +augmentations = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), + transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), + transforms.ToTensor(), + transforms.Normalize([0.5], [0.5]), + ] +) Finally, the training loop handles everything else such as adding noise to the images, predicting the noise residual, calculating the loss, saving checkpoints at specified steps, and saving and pushing the model to the Hub. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 A full training run takes 2 hours on 4xV100 GPUs. 
single GPU multi-GPU Copied accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --output_dir="ddpm-ema-flowers-64" \ + --mixed_precision="fp16" \ + --push_to_hub The training script creates and saves a checkpoint file in your repository. Now you can load and use your trained model for inference: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda") +image = pipeline().images[0] diff --git a/scrapped_outputs/d60a4937be637a57331260982b958a32.txt b/scrapped_outputs/d60a4937be637a57331260982b958a32.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d644f35aee5328950f737d572a123ee7.txt b/scrapped_outputs/d644f35aee5328950f737d572a123ee7.txt new file mode 100644 index 0000000000000000000000000000000000000000..a8189c1b645b7d3e168af57a95c95aa7ed2e38d7 --- /dev/null +++ b/scrapped_outputs/d644f35aee5328950f737d572a123ee7.txt @@ -0,0 +1,239 @@ +Understanding pipelines, models and schedulers + + + + + + + + + + + + +🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. +In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. + +Deconstruct a basic pipeline + +A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: + + + Copied +>>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image + +That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. +In the example above, the pipeline contains a UNet model and a DDPM scheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. +To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. +Load the model and scheduler: + + + Copied +>>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda") +Set the number of timesteps to run the denoising process for: + + + Copied +>>> scheduler.set_timesteps(50) +Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. 
When you create the denoising loop later, you'll iterate over this tensor to denoise an image: + + + Copied +>>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) +Create some random noise with the same shape as the desired output: + + + Copied +>>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size)).to("cuda") +Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler's step() method takes the noisy residual, timestep, and input, and predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and the loop repeats until it reaches the end of the timesteps array. + + + Copied +>>> input = noise + +>>> for t in scheduler.timesteps: +...     with torch.no_grad(): +...         noisy_residual = model(input, t).sample +...     previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +...     input = previous_noisy_sample +This is the entire denoising process, and you can use this same pattern to write any diffusion system. +The last step is to convert the denoised output into an image: + + + Copied +>>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1) +>>> image = image.cpu().permute(0, 2, 3, 1).numpy()[0] +>>> image = Image.fromarray((image * 255).round().astype("uint8")) +>>> image +In the next section, you'll put your skills to the test and break down the more complex Stable Diffusion pipeline. The steps are more or less the same: you'll initialize the necessary components and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timesteps, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. +Let's try it out! + +Deconstruct the Stable Diffusion pipeline + +Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder converts the compressed representation back into an image. For text-to-image models, you'll need a tokenizer and a text encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. +As you can see, this is already more complex than the DDPM pipeline, which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. +💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. +Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method.
You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: + + + Copied +>>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae") +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder") +>>> unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet") +Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: + + + Copied +>>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: + + + Copied +>>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) + +Create text embeddings + +The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. +💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. +Feel free to choose any prompt you like if you want to generate something else! + + + Copied +>>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the inital latent noise +>>> batch_size = len(prompt) +Tokenize the text and generate the embeddings from the prompt: + + + Copied +>>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] +You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: + + + Copied +>>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] +Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: + + + Copied +>>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) + +Create random noise + +Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 
+💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: + + + Copied +2 ** (len(vae.config.block_out_channels) - 1) == 8 + + + Copied +>>> latents = torch.randn( +... (batch_size, unet.in_channels, height // 8, width // 8), +... generator=generator, +... ) +>>> latents = latents.to(torch_device) + +Denoise the image + +Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: + + + Copied +>>> latents = latents * scheduler.init_noise_sigma +The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: +Set the scheduler’s timesteps to use during denoising. +Iterate over the timesteps. +At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. + + + Copied +>>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample + +Decode the image + +The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: + + + Copied +# scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample +Lastly, convert the image to a PIL.Image to see your generated image! + + + Copied +>>> image = (image / 2 + 0.5).clamp(0, 1) +>>> image = image.detach().cpu().permute(0, 2, 3, 1).numpy() +>>> images = (image * 255).round().astype("uint8") +>>> pil_images = [Image.fromarray(image) for image in images] +>>> pil_images[0] + + +Next steps + +From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. +This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. +For your next steps, feel free to: +Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! +Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. 
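If you want a compact starting point for that exercise, the core pattern from this tutorial can be collapsed into a few lines (a rough sketch reusing the same DDPM checkpoint from the first section; adapt the model, scheduler, and post-processing to your own system): Copied import torch
+from diffusers import DDPMScheduler, UNet2DModel
+
+scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256")
+model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda")
+scheduler.set_timesteps(50)
+
+# start from pure noise and iteratively denoise it
+sample = torch.randn((1, 3, model.config.sample_size, model.config.sample_size)).to("cuda")
+for t in scheduler.timesteps:
+    with torch.no_grad():
+        noise_pred = model(sample, t).sample  # predict the noise residual
+    sample = scheduler.step(noise_pred, t, sample).prev_sample  # step to the previous, less noisy sample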
diff --git a/scrapped_outputs/d6a4ed1c4560c735935ddfbe0caca0c8.txt b/scrapped_outputs/d6a4ed1c4560c735935ddfbe0caca0c8.txt new file mode 100644 index 0000000000000000000000000000000000000000..a60cf1709306cd604a335558453963caf02df74b --- /dev/null +++ b/scrapped_outputs/d6a4ed1c4560c735935ddfbe0caca0c8.txt @@ -0,0 +1,56 @@ +Community pipelines For more context about the design choices behind community pipelines, please have a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it’ll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!). To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True +) If a community pipeline doesn’t work as expected, please open a GitHub issue and mention the author. You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides. Multilingual Stable Diffusion The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages. Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid +from transformers import ( + pipeline, + MBart50TokenizerFast, + MBartForConditionalGeneration, +) + +device = "cuda" if torch.cuda.is_available() else "cpu" +device_dict = {"cuda": 0, "cpu": -1} + +# add language detection pipeline +language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection" +language_detection_pipeline = pipeline("text-classification", + model=language_detection_model_ckpt, + device=device_dict[device]) + +# add model for language translation +translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt") +translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device) + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="multilingual_stable_diffusion", + detection_pipeline=language_detection_pipeline, + translation_model=translation_model, + translation_tokenizer=translation_tokenizer, + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +prompt = ["a photograph of an astronaut riding a horse", + "Una casa en la playa", + "Ein Hund, der Orange isst", + "Un restaurant parisien"] + +images = diffuser_pipeline(prompt).images +make_image_grid(images, rows=2, cols=2) MagicMix MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image. 
Copied from diffusers import DiffusionPipeline, DDIMScheduler +from diffusers.utils import load_image, make_image_grid + +pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="magic_mix", + scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"), +).to('cuda') + +img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg") +mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5) +make_image_grid([img, mix_img], rows=1, cols=2) original image image and text prompt mix diff --git a/scrapped_outputs/d6d5de4945905ec6d48eceb029956a8e.txt b/scrapped_outputs/d6d5de4945905ec6d48eceb029956a8e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e7efd5146c1078113af0423ef6c60dab2df7383d --- /dev/null +++ b/scrapped_outputs/d6d5de4945905ec6d48eceb029956a8e.txt @@ -0,0 +1,77 @@ +Stable Diffusion XL This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Stable Diffusion XL (SDXL) is a larger and more powerful iteration of the Stable Diffusion model, capable of producing higher resolution images. SDXL’s UNet is 3x larger and the model adds a second text encoder to the architecture. Depending on the hardware available to you, this can be very computationally intensive and it may not run on a consumer GPU like a Tesla T4. To help fit this larger model into memory and to speedup training, try enabling gradient_checkpointing, mixed_precision, and gradient_accumulation_steps. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and using bitsandbytes’ 8-bit optimizer. This guide will explore the train_text_to_image_sdxl.py training script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/text_to_image +pip install -r requirements_sdxl.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. 
All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the bf16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_sdxl.py \ + --mixed_precision="bf16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to training SDXL in this guide. --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --proportion_empty_prompts: the proportion of image prompts to replace with empty strings --timestep_bias_strategy: where (earlier vs. later) in the timestep to apply a bias, which can encourage the model to either learn low or high frequency details --timestep_bias_multiplier: the weight of the bias to apply to the timestep --timestep_bias_begin: the timestep to begin applying the bias --timestep_bias_end: the timestep to end applying the bias --timestep_bias_portion: the proportion of timesteps to apply the bias to Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting either epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_sdxl.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support SDXL training. This guide will focus on the code that is unique to the SDXL training script. It starts by creating functions to tokenize the prompts to calculate the prompt embeddings, and to compute the image embeddings with the VAE. Next, you’ll a function to generate the timesteps weights depending on the number of timesteps and the timestep bias strategy to apply. Within the main() function, in addition to loading a tokenizer, the script loads a second tokenizer and text encoder because the SDXL architecture uses two of each: Copied tokenizer_one = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False +) +tokenizer_two = AutoTokenizer.from_pretrained( + args.pretrained_model_name_or_path, subfolder="tokenizer_2", revision=args.revision, use_fast=False +) + +text_encoder_cls_one = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision +) +text_encoder_cls_two = import_model_class_from_model_name_or_path( + args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2" +) The prompt and image embeddings are computed first and kept in memory, which isn’t typically an issue for a smaller dataset, but for larger datasets it can lead to memory problems. If this is the case, you should save the pre-computed embeddings to disk separately and load them into memory during the training process (see this PR for more discussion about this topic). 
Copied text_encoders = [text_encoder_one, text_encoder_two] +tokenizers = [tokenizer_one, tokenizer_two] +compute_embeddings_fn = functools.partial( + encode_prompt, + text_encoders=text_encoders, + tokenizers=tokenizers, + proportion_empty_prompts=args.proportion_empty_prompts, + caption_column=args.caption_column, +) + +train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) +train_dataset = train_dataset.map( + compute_vae_encodings_fn, + batched=True, + batch_size=args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps, + new_fingerprint=new_fingerprint_for_vae, +) After calculating the embeddings, the text encoder, VAE, and tokenizer are deleted to free up some memory: Copied del text_encoders, tokenizers, vae +gc.collect() +torch.cuda.empty_cache() Finally, the training loop takes care of the rest. If you chose to apply a timestep bias strategy, you’ll see the timestep weights are calculated and added as noise: Copied weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to( + model_input.device + ) + timesteps = torch.multinomial(weights, bsz, replacement=True).long() + +noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and the dataset (either from the Hub or a local path). You should also specify a VAE other than the SDXL VAE (either from the Hub or a local path) with VAE_NAME to avoid numerical instabilities. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt and --validation_epochs to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0" +export VAE_NAME="madebyollin/sdxl-vae-fp16-fix" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --pretrained_vae_model_name_or_path=$VAE_NAME \ + --dataset_name=$DATASET_NAME \ + --enable_xformers_memory_efficient_attention \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --proportion_empty_prompts=0.2 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=10000 \ + --use_8bit_adam \ + --learning_rate=1e-06 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --mixed_precision="fp16" \ + --report_to="wandb" \ + --validation_prompt="a cute Sundar Pichai creature" \ + --validation_epochs 5 \ + --checkpointing_steps=5000 \ + --output_dir="sdxl-pokemon-model" \ + --push_to_hub After you’ve finished training, you can use your newly trained SDXL model for inference! PyTorch PyTorch XLA Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained("path/to/your/model", torch_dtype=torch.float16).to("cuda") + +prompt = "A pokemon with green eyes and red legs." 
+image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0] +image.save("pokemon.png") Next steps Congratulations on training a SDXL model! To learn more about how to use your new model, the following guides may be helpful: Read the Stable Diffusion XL guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting), how to use it’s refiner model, and the different types of micro-conditionings. Check out the DreamBooth and LoRA training guides to learn how to train a personalized SDXL model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/d6f060ea9d1e9c9bf3c5597c3214e0b5.txt b/scrapped_outputs/d6f060ea9d1e9c9bf3c5597c3214e0b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/d6f060ea9d1e9c9bf3c5597c3214e0b5.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. 
latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. (See the short usage sketch below.) diff --git a/scrapped_outputs/d79f5e9466bdc01d0ccf9dde7d8aca42.txt b/scrapped_outputs/d79f5e9466bdc01d0ccf9dde7d8aca42.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d0a8a28b6d3bc1a9ce7a2bdbcac9943975943ca --- /dev/null +++ b/scrapped_outputs/d79f5e9466bdc01d0ccf9dde7d8aca42.txt @@ -0,0 +1 @@ +Overview Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! Let’s start diffusing!
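As a quick illustration of the VQModel API documented above, here is a minimal encode/decode round trip. It uses a randomly initialized model with the default configuration and a random input tensor, so the output is meaningless; the shapes and the decode call are written from the signatures above and should be treated as an illustrative sketch rather than a canonical example. Copied
import torch
from diffusers import VQModel

# Randomly initialized VQModel with the default configuration (3 channels, 32x32 samples)
model = VQModel()

sample = torch.randn(1, 3, 32, 32)  # stand-in for a batch of images
with torch.no_grad():
    latents = model.encode(sample).latents         # VQEncoderOutput.latents
    reconstruction = model.decode(latents).sample  # quantized against the codebook, then decoded
print(reconstruction.shape)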
🧨 diff --git a/scrapped_outputs/d7fbaa4889186b7a55c0be42f14738d0.txt b/scrapped_outputs/d7fbaa4889186b7a55c0be42f14738d0.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/d7fbaa4889186b7a55c0be42f14738d0.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. 
Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. 
Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/d8051346847a406e8ea021964a57f919.txt b/scrapped_outputs/d8051346847a406e8ea021964a57f919.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d80eb749a741292d494c7f7f5c9a93af.txt b/scrapped_outputs/d80eb749a741292d494c7f7f5c9a93af.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d841ff0aed9c26c5f871bf92c88b9746.txt b/scrapped_outputs/d841ff0aed9c26c5f871bf92c88b9746.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git 
a/scrapped_outputs/d846e28d6f9dbf3a06d20054f128913a.txt b/scrapped_outputs/d846e28d6f9dbf3a06d20054f128913a.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/d846e28d6f9dbf3a06d20054f128913a.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/d85964071cc62b7425b9f711a1774e67.txt b/scrapped_outputs/d85964071cc62b7425b9f711a1774e67.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0ff9812e8390d7761559412d64c19cfc04afa33 --- /dev/null +++ b/scrapped_outputs/d85964071cc62b7425b9f711a1774e67.txt @@ -0,0 +1,89 @@ +Quicktour Diffusion models are trained to denoise random Gaussian noise step-by-step to generate a sample of interest, such as an image or audio. 
This has sparked a tremendous amount of interest in generative AI, and you have probably seen examples of diffusion generated images on the internet. 🧨 Diffusers is a library aimed at making diffusion models widely accessible to everyone. Whether you’re a developer or an everyday user, this quicktour will introduce you to 🧨 Diffusers and help you get up and generating quickly! There are three main components of the library to know about: The DiffusionPipeline is a high-level end-to-end class designed to rapidly generate samples from pretrained diffusion models for inference. Popular pretrained model architectures and modules that can be used as building blocks for creating diffusion systems. Many different schedulers - algorithms that control how noise is added for training, and how to generate denoised images during inference. The quicktour will show you how to use the DiffusionPipeline for inference, and then walk you through how to combine a model and scheduler to replicate what’s happening inside the DiffusionPipeline. The quicktour is a simplified version of the introductory 🧨 Diffusers notebook to help you get started quickly. If you want to learn more about 🧨 Diffusers’ goal, design philosophy, and additional details about its core API, check out the notebook! Before you begin, make sure you have all the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install --upgrade diffusers accelerate transformers 🤗 Accelerate speeds up model loading for inference and training. 🤗 Transformers is required to run the most popular diffusion models, such as Stable Diffusion. DiffusionPipeline The DiffusionPipeline is the easiest way to use a pretrained diffusion system for inference. It is an end-to-end system containing the model and the scheduler. You can use the DiffusionPipeline out-of-the-box for many tasks. Take a look at the table below for some supported tasks, and for a complete list of supported tasks, check out the 🧨 Diffusers Summary table. Task Description Pipeline Unconditional Image Generation generate an image from Gaussian noise unconditional_image_generation Text-Guided Image Generation generate an image given a text prompt conditional_image_generation Text-Guided Image-to-Image Translation adapt an image guided by a text prompt img2img Text-Guided Image-Inpainting fill the masked part of an image given the image, the mask and a text prompt inpaint Text-Guided Depth-to-Image Translation adapt parts of an image guided by a text prompt while preserving structure via depth estimation depth2img Start by creating an instance of a DiffusionPipeline and specify which pipeline checkpoint you would like to download. +You can use the DiffusionPipeline for any checkpoint stored on the Hugging Face Hub. +In this quicktour, you’ll load the stable-diffusion-v1-5 checkpoint for text-to-image generation. For Stable Diffusion models, please carefully read the license first before running the model. 🧨 Diffusers implements a safety_checker to prevent offensive or harmful content, but the model’s improved image generation capabilities can still produce potentially harmful content. Load the model with the from_pretrained() method: Copied >>> from diffusers import DiffusionPipeline + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) The DiffusionPipeline downloads and caches all modeling, tokenization, and scheduling components. 
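If you want a copy of those components outside the cache, you can also write the pipeline to a local directory and reload it from there (a small sketch; the directory name is arbitrary): Copied
>>> pipeline.save_pretrained("./my-stable-diffusion-v1-5")
>>> pipeline = DiffusionPipeline.from_pretrained("./my-stable-diffusion-v1-5")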
You’ll see that the Stable Diffusion pipeline is composed of the UNet2DConditionModel and PNDMScheduler among other things: Copied >>> pipeline +StableDiffusionPipeline { + "_class_name": "StableDiffusionPipeline", + "_diffusers_version": "0.21.4", + ..., + "scheduler": [ + "diffusers", + "PNDMScheduler" + ], + ..., + "unet": [ + "diffusers", + "UNet2DConditionModel" + ], + "vae": [ + "diffusers", + "AutoencoderKL" + ] +} We strongly recommend running the pipeline on a GPU because the model consists of roughly 1.4 billion parameters. +You can move the generator object to a GPU, just like you would in PyTorch: Copied >>> pipeline.to("cuda") Now you can pass a text prompt to the pipeline to generate an image, and then access the denoised image. By default, the image output is wrapped in a PIL.Image object. Copied >>> image = pipeline("An image of a squirrel in Picasso style").images[0] +>>> image Save the image by calling save: Copied >>> image.save("image_of_squirrel_painting.png") Local pipeline You can also use the pipeline locally. The only difference is you need to download the weights first: Copied !git lfs install +!git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then load the saved weights into the pipeline: Copied >>> pipeline = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True) Now, you can run the pipeline as you would in the section above. Swapping schedulers Different schedulers come with different denoising speeds and quality trade-offs. The best way to find out which one works best for you is to try them out! One of the main features of 🧨 Diffusers is to allow you to easily switch between schedulers. For example, to replace the default PNDMScheduler with the EulerDiscreteScheduler, load it with the from_config() method: Copied >>> from diffusers import EulerDiscreteScheduler + +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) Try generating an image with the new scheduler and see if you notice a difference! In the next section, you’ll take a closer look at the components - the model and scheduler - that make up the DiffusionPipeline and learn how to use these components to generate an image of a cat. Models Most models take a noisy sample, and at each timestep it predicts the noise residual (other models learn to predict the previous sample directly or the velocity or v-prediction), the difference between a less noisy image and the input image. You can mix and match models to create other diffusion systems. Models are initiated with the from_pretrained() method which also locally caches the model weights so it is faster the next time you load the model. For the quicktour, you’ll load the UNet2DModel, a basic unconditional image generation model with a checkpoint trained on cat images: Copied >>> from diffusers import UNet2DModel + +>>> repo_id = "google/ddpm-cat-256" +>>> model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True) To access the model parameters, call model.config: Copied >>> model.config The model configuration is a 🧊 frozen 🧊 dictionary, which means those parameters can’t be changed after the model is created. This is intentional and ensures that the parameters used to define the model architecture at the start remain the same, while other parameters can still be adjusted during inference. 
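To see this freezing in practice: reads succeed while writes are rejected. The following is a small hedged sketch (the exact exception raised may differ between versions): Copied
>>> print(model.config.sample_size)  # reading a value works
256
>>> try:
...     model.config.sample_size = 128  # expected to raise because the config is frozen
... except Exception:
...     print("config is frozen")
config is frozen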
Some of the most important parameters are: sample_size: the height and width dimension of the input sample. in_channels: the number of input channels of the input sample. down_block_types and up_block_types: the type of down- and upsampling blocks used to create the UNet architecture. block_out_channels: the number of output channels of the downsampling blocks; also used in reverse order for the number of input channels of the upsampling blocks. layers_per_block: the number of ResNet blocks present in each UNet block. To use the model for inference, create the image shape with random Gaussian noise. It should have a batch axis because the model can receive multiple random noises, a channel axis corresponding to the number of input channels, and a sample_size axis for the height and width of the image: Copied >>> import torch + +>>> torch.manual_seed(0) + +>>> noisy_sample = torch.randn(1, model.config.in_channels, model.config.sample_size, model.config.sample_size) +>>> noisy_sample.shape +torch.Size([1, 3, 256, 256]) For inference, pass the noisy image and a timestep to the model. The timestep indicates how noisy the input image is, with more noise at the beginning and less at the end. This helps the model determine its position in the diffusion process, whether it is closer to the start or the end. Use the sample method to get the model output: Copied >>> with torch.no_grad(): +... noisy_residual = model(sample=noisy_sample, timestep=2).sample To generate actual examples though, you’ll need a scheduler to guide the denoising process. In the next section, you’ll learn how to couple a model with a scheduler. Schedulers Schedulers manage going from a noisy sample to a less noisy sample given the model output - in this case, it is the noisy_residual. 🧨 Diffusers is a toolbox for building diffusion systems. While the DiffusionPipeline is a convenient way to get started with a pre-built diffusion system, you can also choose your own model and scheduler components separately to build a custom diffusion system. For the quicktour, you’ll instantiate the DDPMScheduler with its from_config() method: Copied >>> from diffusers import DDPMScheduler + +>>> scheduler = DDPMScheduler.from_pretrained(repo_id) +>>> scheduler +DDPMScheduler { + "_class_name": "DDPMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.02, + "beta_schedule": "linear", + "beta_start": 0.0001, + "clip_sample": true, + "clip_sample_range": 1.0, + "dynamic_thresholding_ratio": 0.995, + "num_train_timesteps": 1000, + "prediction_type": "epsilon", + "sample_max_value": 1.0, + "steps_offset": 0, + "thresholding": false, + "timestep_spacing": "leading", + "trained_betas": null, + "variance_type": "fixed_small" +} 💡 Unlike a model, a scheduler does not have trainable weights and is parameter-free! Some of the most important parameters are: num_train_timesteps: the length of the denoising process or, in other words, the number of timesteps required to process random Gaussian noise into a data sample. beta_schedule: the type of noise schedule to use for inference and training. beta_start and beta_end: the start and end noise values for the noise schedule. To predict a slightly less noisy image, pass the following to the scheduler’s step() method: model output, timestep, and current sample. 
Copied >>> less_noisy_sample = scheduler.step(model_output=noisy_residual, timestep=2, sample=noisy_sample).prev_sample +>>> less_noisy_sample.shape +torch.Size([1, 3, 256, 256]) The less_noisy_sample can be passed to the next timestep where it’ll get even less noisy! Let’s bring it all together now and visualize the entire denoising process. First, create a function that postprocesses and displays the denoised image as a PIL.Image: Copied >>> import PIL.Image +>>> import numpy as np + + +>>> def display_sample(sample, i): +... image_processed = sample.cpu().permute(0, 2, 3, 1) +... image_processed = (image_processed + 1.0) * 127.5 +... image_processed = image_processed.numpy().astype(np.uint8) + +... image_pil = PIL.Image.fromarray(image_processed[0]) +... display(f"Image at step {i}") +... display(image_pil) To speed up the denoising process, move the input and model to a GPU: Copied >>> model.to("cuda") +>>> noisy_sample = noisy_sample.to("cuda") Now create a denoising loop that predicts the residual of the less noisy sample, and computes the less noisy sample with the scheduler: Copied >>> import tqdm + +>>> sample = noisy_sample + +>>> for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)): +... # 1. predict noise residual +... with torch.no_grad(): +... residual = model(sample, t).sample + +... # 2. compute less noisy image and set x_t -> x_t-1 +... sample = scheduler.step(residual, t, sample).prev_sample + +... # 3. optionally look at image +... if (i + 1) % 50 == 0: +... display_sample(sample, i + 1) Sit back and watch as a cat is generated from nothing but noise! 😻 Next steps Hopefully, you generated some cool images with 🧨 Diffusers in this quicktour! For your next steps, you can: Train or finetune a model to generate your own images in the training tutorial. See example official and community training or finetuning scripts for a variety of use cases. Learn more about loading, accessing, changing, and comparing schedulers in the Using different Schedulers guide. Explore prompt engineering, speed and memory optimizations, and tips and tricks for generating higher-quality images with the Stable Diffusion guide. Dive deeper into speeding up 🧨 Diffusers with guides on optimized PyTorch on a GPU, and inference guides for running Stable Diffusion on Apple Silicon (M1/M2) and ONNX Runtime. diff --git a/scrapped_outputs/d8774432314d4ff5cda989b0f40adca9.txt b/scrapped_outputs/d8774432314d4ff5cda989b0f40adca9.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/d8774432314d4ff5cda989b0f40adca9.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
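A minimal, self-contained sketch of the usual set_timesteps()/step() calling pattern with this scheduler is shown below. The random tensors stand in for a real denoising model and a real sample, so this only illustrates the API; the sample shape and the default configuration are assumptions. Copied
import torch
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler()        # default configuration
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 32, 32)  # stand-in for a noisy sample
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)  # a real model's prediction would go here
    sample = scheduler.step(model_output, t, sample).prev_sample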
scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/d8939b5388b00be18dd0cf7b38da5d18.txt b/scrapped_outputs/d8939b5388b00be18dd0cf7b38da5d18.txt new file mode 100644 index 0000000000000000000000000000000000000000..7172ff07e1b418100afd17352ce66615379947e7 --- /dev/null +++ b/scrapped_outputs/d8939b5388b00be18dd0cf7b38da5d18.txt @@ -0,0 +1,845 @@ +ControlNet ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. With a ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. The abstract from the paper is: We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. 
We test various conditioning controls, eg, edges, depth, segmentation, human pose, etc, with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models. This model was contributed by takuma104. ❤️ The original codebase can be found at lllyasviel/ControlNet, and you can find official ControlNet checkpoints on lllyasviel’s Hub profile. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionControlNetPipeline class diffusers.StableDiffusionControlNetPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. When prompt is a list, and if a list of images is passed for a single ControlNet, +each will be paired with each prompt in the prompt list. This also applies to multiple ControlNets, +where a list of image lists can be passed to batch for each prompt and each ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> image = np.array(image) + +>>> # get canny image +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> # remove following line if xformers is not installed +>>> pipe.enable_xformers_memory_efficient_attention() + +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. 
weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
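+# Optional aside (illustrative, not part of the original example): negative embeddings are
+# loaded the same way and then referenced by their token through `negative_prompt`.
+# The file name below is hypothetical:
+# pipe.load_textual_inversion("./easynegative.safetensors", token="EasyNegative")
+# negative_prompt = "EasyNegative"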
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionControlNetImg2ImgPipeline class diffusers.StableDiffusionControlNetImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image-to-image generation using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None control_image: Union = None height: Optional = None width: Optional = None strength: float = 0.8 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.8 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The initial image to be used as the starting point for the image generation process. Can also accept +image latents as image, and if passing latents directly they are not encoded again. control_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. 
image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. 
guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +... ) +>>> np_image = np.array(image) + +>>> # get canny image +>>> np_image = cv2.Canny(np_image, 100, 200) +>>> np_image = np_image[:, :, None] +>>> np_image = np.concatenate([np_image, np_image, np_image], axis=2) +>>> canny_image = Image.fromarray(np_image) + +>>> # load control net and stable diffusion v1-5 +>>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +>>> pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> # speed up diffusion process with faster scheduler and memory optimization +>>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> generator = torch.manual_seed(0) +>>> image = pipe( +... "futuristic-looking woman", +... num_inference_steps=20, +... generator=generator, +... image=image, +... control_image=canny_image, +...
).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers.
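The VAE slicing helpers above are documented without an inline example, so here is a minimal sketch of how sliced VAE decoding might be toggled; it reuses the Stable Diffusion v1-5 checkpoint from the attention slicing example, and the batched prompt is purely illustrative: Copied
>>> import torch
>>> from diffusers import StableDiffusionPipeline

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
... ).to("cuda")

>>> # decode the VAE in slices to lower peak memory for larger batches, at a small speed cost
>>> pipe.enable_vae_slicing()
>>> images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images

>>> # go back to decoding in a single step once the extra headroom is no longer needed
>>> pipe.disable_vae_slicing()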
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported).
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionControlNetInpaintPipeline class diffusers.StableDiffusionControlNetInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel controlnet: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). 
tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetModel or List[ControlNetModel]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple +ControlNets as a list, the outputs from each ControlNet are added together to create one combined +additional conditioning. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for image inpainting using Stable Diffusion with ControlNet guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters This pipeline can be used with checkpoints that have been specifically fine-tuned for inpainting +(runwayml/stable-diffusion-inpainting) as well as +default text-to-image Stable Diffusion checkpoints +(runwayml/stable-diffusion-v1-5). Default text-to-image +Stable Diffusion checkpoints might be preferable for ControlNets that have been fine-tuned on those, such as +lllyasviel/control_v11p_sd15_inpaint. __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None control_image: Union = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 1.0 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 0.5 guess_mode: bool = False control_guidance_start: Union = 0.0 control_guidance_end: Union = 1.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to be used as the starting point. For both +NumPy array and PyTorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a +list of tensors, the expected shape should be (B, C, H, W) or (C, H, W).
If it is a NumPy array or +a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], — +List[PIL.Image.Image], or List[np.ndarray]): +Image, NumPy array or tensor representing an image batch to mask the image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a NumPy array or PyTorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for a PyTorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for a NumPy array, it would be (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). control_image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], — +List[List[torch.FloatTensor]], or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image default to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and masking. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all masked areas, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 1.0) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead.
Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 0.5) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. If multiple ControlNets are specified in init, you can set +the corresponding scale as a list. guess_mode (bool, optional, defaults to False) — +The ControlNet encoder tries to recognize the content of the input image even if you remove all +prompts. A guidance_scale value between 3.0 and 5.0 is recommended. control_guidance_start (float or List[float], optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float or List[float], optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict).
callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, DDIMScheduler +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> init_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy.png" +... ) +>>> init_image = init_image.resize((512, 512)) + +>>> generator = torch.Generator(device="cpu").manual_seed(1) + +>>> mask_image = load_image( +... "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/boy_mask.png" +... ) +>>> mask_image = mask_image.resize((512, 512)) + + +>>> def make_canny_condition(image): +... image = np.array(image) +... image = cv2.Canny(image, 100, 200) +... image = image[:, :, None] +... image = np.concatenate([image, image, image], axis=2) +... image = Image.fromarray(image) +... return image + + +>>> control_image = make_canny_condition(init_image) + +>>> controlnet = ControlNetModel.from_pretrained( +... "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16 +... ) +>>> pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 +... ) + +>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # generate image +>>> image = pipe( +... "a handsome man with ray-ban sunglasses", +... num_inference_steps=20, +... generator=generator, +... eta=1.0, +... image=init_image, +... mask_image=mask_image, +... control_image=control_image, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function.
If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file.
This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." 
+ +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionControlNetPipeline class diffusers.FlaxStableDiffusionControlNetPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel controlnet: FlaxControlNetModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. controlnet (FlaxControlNetModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. 
safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Union = 1.0 return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt_ids (jnp.ndarray) — +The prompt or prompts to guide the image generation. image (jnp.ndarray) — +Array representing the ControlNet input condition to provide guidance to the unet for generation. params (Dict or FrozenDict) — +Dictionary containing the model parameters/weights. prng_seed (jax.Array) — +Array containing random number generator key. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> import jax.numpy as jnp +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard +>>> from diffusers.utils import load_image, make_image_grid +>>> from PIL import Image +>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel + + +>>> def create_key(seed=0): +... 
return jax.random.PRNGKey(seed) + + +>>> rng = create_key(0) + +>>> # get canny image +>>> canny_image = load_image( +... "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg" +... ) + +>>> prompts = "best quality, extremely detailed" +>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality" + +>>> # load control net and stable diffusion v1-5 +>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained( +... "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32 +... ) +>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32 +... ) +>>> params["controlnet"] = controlnet_params + +>>> num_samples = jax.device_count() +>>> rng = jax.random.split(rng, jax.device_count()) + +>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples) +>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples) +>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples) + +>>> p_params = replicate(params) +>>> prompt_ids = shard(prompt_ids) +>>> negative_prompt_ids = shard(negative_prompt_ids) +>>> processed_image = shard(processed_image) + +>>> output = pipe( +... prompt_ids=prompt_ids, +... image=processed_image, +... params=p_params, +... prng_seed=rng, +... num_inference_steps=50, +... neg_prompt_ids=negative_prompt_ids, +... jit=True, +... ).images + +>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:]))) +>>> output_images = make_image_grid(output_images, num_samples // 4, 4) +>>> output_images.save("generated_image.png") FlaxStableDiffusionControlNetPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/d8a7660ec547856d9f7e9c61ba3eb7ae.txt b/scrapped_outputs/d8a7660ec547856d9f7e9c61ba3eb7ae.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/d8a7660ec547856d9f7e9c61ba3eb7ae.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. 
Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/d8c6226545b7e195be581e57280a6335.txt b/scrapped_outputs/d8c6226545b7e195be581e57280a6335.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa69efa9696034670fc8ca476928c6521eb0af53 --- /dev/null +++ b/scrapped_outputs/d8c6226545b7e195be581e57280a6335.txt @@ -0,0 +1,212 @@ +Train a diffusion model Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. 💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models like how they work, check out the notebook! Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). Copied # uncomment to install the necessary libraries in Colab +#!pip install diffusers[training] We encourage you to share your model with the community, and in order to do that, you’ll need to login to your Hugging Face account (create one here if you don’t already have one!). You can login from a notebook and enter your token when prompted. Make sure your token has the write role. Copied >>> from huggingface_hub import notebook_login + +>>> notebook_login() Or login in from the terminal: Copied huggingface-cli login Since the model checkpoints are quite large, install Git-LFS to version these large files: Copied !sudo apt -qq install git-lfs +!git config --global credential.helper store Training configuration For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): Copied >>> from dataclasses import dataclass + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... 
mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_model_id = "/" # the name of the repository to create on the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() Load the dataset You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: Copied >>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") 💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. 🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: Copied >>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() The images are all different sizes though, so you’ll need to preprocess them first: Resize changes the image size to the one defined in config.image_size. RandomHorizontalFlip augments the dataset by randomly mirroring the images. Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. Copied >>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: Copied >>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! Copied >>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) Create a UNet2DModel Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: Copied >>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... 
"AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) It is often a good idea to quickly check the sample image shape matches the model output shape: Copied >>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) Great! Next, you’ll need a scheduler to add some noise to the image. Create a scheduler The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule. Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: Copied >>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: Copied >>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) Train the model By now, you have most of the pieces to start training the model and all that’s left is putting everything together. First, you’ll need an optimizer and a learning rate scheduler: Copied >>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: Copied >>> from diffusers import DDPMPipeline +>>> from diffusers.utils import make_image_grid +>>> import os + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_image_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. 
💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 Copied >>> from accelerate import Accelerator +>>> from huggingface_hub import create_repo, upload_folder +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... project_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... if config.push_to_hub: +... repo_id = create_repo( +... repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True +... ).repo_id +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape, device=clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device, +... dtype=torch.int64 +... ) + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... 
if config.push_to_hub: +... upload_folder( +... repo_id=repo_id, +... folder_path=config.output_dir, +... commit_message=f"Epoch {epoch}", +... ignore_patterns=["step_*", "epoch_*"], +... ) +... else: +... pipeline.save_pretrained(config.output_dir) Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: Copied >>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! Copied >>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) Next steps Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. Guide to finetuning a Stable Diffusion model on your own dataset. Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/d8d3986237a1d323b30c19af83cb807f.txt b/scrapped_outputs/d8d3986237a1d323b30c19af83cb807f.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/d8d3986237a1d323b30c19af83cb807f.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. 
Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. 
Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an 
image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/d8f2beffa4129ef8853876b612c42da2.txt b/scrapped_outputs/d8f2beffa4129ef8853876b612c42da2.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4d8d702045481a41c259fb77d2569da57e434ee --- /dev/null +++ b/scrapped_outputs/d8f2beffa4129ef8853876b612c42da2.txt @@ -0,0 +1,152 @@ +Heun scheduler inspired by Karras et. al paper + + +Overview + +Algorithm 1 of Karras et. al. +Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library: +All credit for making this scheduler work goes to Katherine Crowson + +HeunDiscreteScheduler + + +class diffusers.HeunDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.00085 +beta_end: float = 0.012 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. beta_start (float): the + + +starting beta value of inference. beta_end (float) — the final beta value. beta_schedule (str): +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. +options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, +fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + + +Implements Algorithm 2 (Heun steps) from Karras et al. (2022). 
for discrete beta schedules. Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the — + + +current timestep. — +sample (torch.FloatTensor): input sample timestep (int, optional): current timestep + + +Returns + +torch.FloatTensor + + + +scaled input sample + + + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None +num_train_timesteps: typing.Optional[int] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: typing.Union[torch.FloatTensor, numpy.ndarray] +timestep: typing.Union[float, torch.FloatTensor] +sample: typing.Union[torch.FloatTensor, numpy.ndarray] +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion — + + +process from the learned model outputs (most often the predicted noise). — +model_output (torch.FloatTensor or np.ndarray): direct output from learned diffusion model. timestep +(int): current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray): +current instance of sample being created by diffusion process. +return_dict (bool): option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. diff --git a/scrapped_outputs/d8f54006f541cc92e91f4d63de52b15c.txt b/scrapped_outputs/d8f54006f541cc92e91f4d63de52b15c.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/d8f54006f541cc92e91f4d63de52b15c.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. 
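For example, one common layout supported by the 🤗 Datasets ImageFolder builder pairs the images with a metadata.jsonl file (the file names and captions below are only placeholders): Copied
data_dir/metadata.jsonl
data_dir/0001.png
data_dir/0002.png
where metadata.jsonl holds one JSON object per line mapping each image to its caption: Copied
{"file_name": "0001.png", "text": "a blue morpho butterfly resting on a leaf"}
{"file_name": "0002.png", "text": "an orange monarch butterfly in flight"}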
This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/d91b149aa43638ad1f1a46af1e7bc5b3.txt b/scrapped_outputs/d91b149aa43638ad1f1a46af1e7bc5b3.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/d91b149aa43638ad1f1a46af1e7bc5b3.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. 
There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. 
This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. 
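As a rough illustration of that ramp (a sketch of the described behaviour rather than the pipeline's literal implementation; the count of 13 residuals, 12 down blocks plus the mid block, is assumed for the standard Stable Diffusion UNet), the per-block scales grow geometrically from 0.1 up to 1.0: Copied
import torch

# 12 down-block residuals + 1 mid-block residual -> 13 scales ramping from 0.1 to 1.0
scales = torch.logspace(-1, 0, steps=13)
print(scales[0].item(), scales[-1].item())  # 0.1 ... 1.0
The full guess mode example below puts this together with an empty prompt: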
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/d9744b7f283ec90d95ed95ea3129d2e4.txt b/scrapped_outputs/d9744b7f283ec90d95ed95ea3129d2e4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/d9a4de90539a616e7981d7556a252ebd.txt 
b/scrapped_outputs/d9a4de90539a616e7981d7556a252ebd.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b33af7ded71fb9ee111a4c828a87ecbd9858360 --- /dev/null +++ b/scrapped_outputs/d9a4de90539a616e7981d7556a252ebd.txt @@ -0,0 +1,36 @@ +Consistency Decoder Consistency decoder can be used to decode the latents from the denoising UNet in the StableDiffusionPipeline. This decoder was introduced in the DALL-E 3 technical report. The original codebase can be found at openai/consistencydecoder. Inference is only supported for 2 iterations as of now. The pipeline could not have been contributed without the help of madebyollin and mrsteyk from this issue. ConsistencyDecoderVAE class diffusers.ConsistencyDecoderVAE < source > ( scaling_factor: float = 0.18215 latent_channels: int = 4 encoder_act_fn: str = 'silu' encoder_block_out_channels: Tuple = (128, 256, 512, 512) encoder_double_z: bool = True encoder_down_block_types: Tuple = ('DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D') encoder_in_channels: int = 3 encoder_layers_per_block: int = 2 encoder_norm_num_groups: int = 32 encoder_out_channels: int = 4 decoder_add_attention: bool = False decoder_block_out_channels: Tuple = (320, 640, 1024, 1024) decoder_down_block_types: Tuple = ('ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D', 'ResnetDownsampleBlock2D') decoder_downsample_padding: int = 1 decoder_in_channels: int = 7 decoder_layers_per_block: int = 3 decoder_norm_eps: float = 1e-05 decoder_norm_num_groups: int = 32 decoder_num_train_timesteps: int = 1024 decoder_out_channels: int = 6 decoder_resnet_time_scale_shift: str = 'scale_shift' decoder_time_embedding_type: str = 'learned' decoder_up_block_types: Tuple = ('ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D', 'ResnetUpsampleBlock2D') ) The consistency decoder used with DALL-E 3. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline, ConsistencyDecoderVAE + +>>> vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16) +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16 +... ).to("cuda") + +>>> pipe("horse", generator=torch.manual_seed(0)).images wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) → DecoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. generator (torch.Generator, optional, defaults to None) — +Generator to use for sampling. Returns +DecoderOutput or tuple + +If return_dict is True, a DecoderOutput is returned, otherwise a plain tuple is returned. + set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput instead of a +plain tuple. Returns +~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput or tuple + +If return_dict is True, a ~models.consistency_decoder_vae.ConsistencyDecoderVAEOutput is returned, +otherwise a plain tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. diff --git a/scrapped_outputs/d9be2588e7eb63b0adcfdd5701fe2525.txt b/scrapped_outputs/d9be2588e7eb63b0adcfdd5701fe2525.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/d9be2588e7eb63b0adcfdd5701fe2525.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. 
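For intuition, here is a minimal sketch of the process just described, not a drop-in implementation: it assumes you already have the components of a Stable Diffusion checkpoint loaded (a vae, unet, scheduler, and precomputed prompt_embeds) plus an initial image preprocessed into a tensor init_image_tensor, and it leaves out details such as classifier-free guidance. The pipelines below handle all of these steps for you. Copied
import torch

strength = 0.6
num_inference_steps = 50

# 1. encode the initial image into latent space and scale the latents
latents = vae.encode(init_image_tensor).latent_dist.sample()
latents = latents * vae.config.scaling_factor

# 2. add noise according to `strength` (a higher strength keeps more denoising steps,
#    meaning more noise is added and less of the original image is preserved)
scheduler.set_timesteps(num_inference_steps)
t_start = int(num_inference_steps * (1 - strength))
timesteps = scheduler.timesteps[t_start:]
noise = torch.randn_like(latents)
latents = scheduler.add_noise(latents, noise, timesteps[:1])

# 3. denoise the noisy latents, conditioned on the text prompt embeddings
for t in timesteps:
    noise_pred = unet(latents, t, encoder_hidden_states=prompt_embeds).sample
    latents = scheduler.step(noise_pred, t, latents).prev_sample

# 4. decode the final latents back into an image
image = vae.decode(latents / vae.config.scaling_factor).sample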
With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
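To put these tips together, here is one possible memory-friendly setup for the image-to-image examples in this guide; it reuses the same checkpoint, image, and prompt as above, and the torch.compile line is optional (PyTorch 2.0 or higher only). Copied
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
# offload model components to the CPU while they are not being used
pipeline.enable_model_cpu_offload()
# memory-efficient attention; not needed on PyTorch 2.0 or higher
pipeline.enable_xformers_memory_efficient_attention()
# optional extra speed-up on PyTorch 2.0 or higher
# pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

image = pipeline(prompt, image=init_image, strength=0.6).images[0]
make_image_grid([init_image, image], rows=1, cols=2)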
diff --git a/scrapped_outputs/d9c96832ce63564b6274d5799639f08e.txt b/scrapped_outputs/d9c96832ce63564b6274d5799639f08e.txt new file mode 100644 index 0000000000000000000000000000000000000000..cc1a72acaf9ff9434b7d5d17c1deecffdb061dc0 --- /dev/null +++ b/scrapped_outputs/d9c96832ce63564b6274d5799639f08e.txt @@ -0,0 +1,318 @@ +Versatile Diffusion Versatile Diffusion was proposed in Versatile Diffusion: Text, Images and Variations All in One Diffusion Model by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, Humphrey Shi. The abstract from the paper is: Recent advances in diffusion models have set an impressive milestone in many generation tasks, and trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-task multimodal network, dubbed Versatile Diffusion (VD), that handles multiple flows of text-to-image, image-to-text, and variations in one unified model. The pipeline design of VD instantiates a unified multi-flow diffusion framework, consisting of sharable and swappable layer modules that enable the crossmodal generality beyond images and text. Through extensive experiments, we demonstrate that VD successfully achieves the following: a) VD outperforms the baseline approaches and handles all its base tasks with competitive quality; b) VD enables novel extensions such as disentanglement of style and semantics, dual- and multi-context blending, etc.; c) The success of our multi-flow multimodal framework over images and text may inspire further diffusion-based universal AI research. Tips You can load the more memory intensive “all-in-one” VersatileDiffusionPipeline that supports all the tasks or use the individual pipelines which are more memory efficient. Pipeline Supported tasks VersatileDiffusionPipeline all of the below VersatileDiffusionTextToImagePipeline text-to-image VersatileDiffusionImageVariationPipeline image variation VersatileDiffusionDualGuidedPipeline image-text dual guided generation Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. VersatileDiffusionPipeline class diffusers.VersatileDiffusionPipeline < source > ( tokenizer: CLIPTokenizer image_feature_extractor: CLIPImageProcessor text_encoder: CLIPTextModel image_encoder: CLIPVisionModel image_unet: UNet2DConditionModel text_unet: UNet2DConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. 
+Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). dual_guided < source > ( prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] image: typing.Union[str, typing.List[str]] text_to_image_strength: float = 0.5 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. 
If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe.dual_guided( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") image_variation < source > ( image: typing.Union[torch.FloatTensor, PIL.Image.Image] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.image_variation(image, generator=generator).images[0] +>>> image.save("./car_variation.png") text_to_image < source > ( prompt: typing.Union[str, typing.List[str]] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. 
negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionPipeline +>>> import torch + +>>> pipe = VersatileDiffusionPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") VersatileDiffusionTextToImagePipeline class diffusers.VersatileDiffusionTextToImagePipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection image_unet: UNet2DConditionModel text_unet: UNetFlatConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using Versatile Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: typing.Union[str, typing.List[str]] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionTextToImagePipeline +>>> import torch + +>>> pipe = VersatileDiffusionTextToImagePipeline.from_pretrained( +... 
"shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe("an astronaut riding on a horse on mars", generator=generator).images[0] +>>> image.save("./astronaut.png") VersatileDiffusionImageVariationPipeline class diffusers.VersatileDiffusionImageVariationPipeline < source > ( image_feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_unet: UNet2DConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image variation using Versatile Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.Tensor] height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters image (PIL.Image.Image, List[PIL.Image.Image] or torch.Tensor) — +The image prompt or prompts to guide the image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import VersatileDiffusionImageVariationPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") + +>>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> image = pipe(image, generator=generator).images[0] +>>> image.save("./car_variation.png") VersatileDiffusionDualGuidedPipeline class diffusers.VersatileDiffusionDualGuidedPipeline < source > ( tokenizer: CLIPTokenizer image_feature_extractor: CLIPImageProcessor text_encoder: CLIPTextModelWithProjection image_encoder: CLIPVisionModelWithProjection image_unet: UNet2DConditionModel text_unet: UNetFlatConditionModel vae: AutoencoderKL scheduler: KarrasDiffusionSchedulers ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image-text dual-guided generation using Versatile Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image]] image: typing.Union[str, typing.List[str]] text_to_image_strength: float = 0.5 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: int = 1 **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.image_unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. 
Examples: Copied >>> from diffusers import VersatileDiffusionDualGuidedPipeline +>>> import torch +>>> import requests +>>> from io import BytesIO +>>> from PIL import Image + +>>> # let's download an initial image +>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" + +>>> response = requests.get(url) +>>> image = Image.open(BytesIO(response.content)).convert("RGB") +>>> text = "a red car in the sun" + +>>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( +... "shi-labs/versatile-diffusion", torch_dtype=torch.float16 +... ) +>>> pipe.remove_unused_weights() +>>> pipe = pipe.to("cuda") + +>>> generator = torch.Generator(device="cuda").manual_seed(0) +>>> text_to_image_strength = 0.75 + +>>> image = pipe( +... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator +... ).images[0] +>>> image.save("./car_variation.png") diff --git a/scrapped_outputs/d9da6189ca5feb684fcb787a38bcdc51.txt b/scrapped_outputs/d9da6189ca5feb684fcb787a38bcdc51.txt new file mode 100644 index 0000000000000000000000000000000000000000..26ea1e7d24785a469700ab7fb5249e6105ca79b2 --- /dev/null +++ b/scrapped_outputs/d9da6189ca5feb684fcb787a38bcdc51.txt @@ -0,0 +1,274 @@ +Cycle Diffusion + + +Overview + +Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in Unifying Diffusion Models’ Latent Space, with Applications to CycleDiffusion and Guidance by Chen Henry Wu, Fernando De la Torre. +The abstract of the paper is the following: +Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. +Tips: +The Cycle Diffusion pipeline is fully compatible with any Stable Diffusion checkpoints +Currently Cycle Diffusion only works with the DDIMScheduler. 
+Example: +In the following we should how to best use the CycleDiffusionPipeline + + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import CycleDiffusionPipeline, DDIMScheduler + +# load the pipeline +# make sure you're logged in with `huggingface-cli login` +model_id_or_path = "CompVis/stable-diffusion-v1-4" +scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler") +pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda") + +# let's download an initial image +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("horse.png") + +# let's specify a prompt +source_prompt = "An astronaut riding a horse" +prompt = "An astronaut riding an elephant" + +# call the pipeline +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.8, + guidance_scale=2, + source_guidance_scale=1, +).images[0] + +image.save("horse_to_elephant.png") + +# let's try another example +# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion +url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png" +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((512, 512)) +init_image.save("black.png") + +source_prompt = "A black colored car" +prompt = "A blue colored car" + +# call the pipeline +torch.manual_seed(0) +image = pipe( + prompt=prompt, + source_prompt=source_prompt, + image=init_image, + num_inference_steps=100, + eta=0.1, + strength=0.85, + guidance_scale=3, + source_guidance_scale=1, +).images[0] + +image.save("black_to_blue.png") + +CycleDiffusionPipeline + + +class diffusers.CycleDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: DDIMScheduler +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Stable Diffusion. +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +source_prompt: typing.Union[str, typing.List[str]] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +source_guidance_scale: typing.Optional[float] = 1 +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +**kwargs + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +source_guidance_scale (float, optional, defaults to 1) — +Guidance scale for the source prompt. This is useful to control the amount of influence the source +prompt for encoding. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.1) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
+ + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/da0a8dec1d37376a1d02bba3645a58c7.txt b/scrapped_outputs/da0a8dec1d37376a1d02bba3645a58c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..ef62c086e705e0fd98841711ee18a967fbc85f5e --- /dev/null +++ b/scrapped_outputs/da0a8dec1d37376a1d02bba3645a58c7.txt @@ -0,0 +1,41 @@ +UNetMotionModel The UNet model was originally introduced by Ronneberger et al for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
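Before the class reference below, here is a minimal sketch of instantiating UNetMotionModel with its default configuration. The weights are randomly initialized (and the default network is large), so this is only useful for inspecting the architecture; in practice the weights come from a pretrained checkpoint:

from diffusers import UNetMotionModel

# Build the model from its default configuration; no downloads are needed, but the
# full-size network is allocated in fp32, so expect a few GB of RAM.
unet = UNetMotionModel()

num_params = sum(p.numel() for p in unet.parameters())
print(f"parameters: {num_params / 1e6:.1f}M")

# The config records the constructor arguments documented below.
print(unet.config.down_block_types)
print(unet.config.motion_max_seq_length)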
UNetMotionModel class diffusers.UNetMotionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 down_block_types: Tuple = ('CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'CrossAttnDownBlockMotion', 'DownBlockMotion') up_block_types: Tuple = ('UpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion', 'CrossAttnUpBlockMotion') block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: int = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: int = 32 norm_eps: float = 1e-05 cross_attention_dim: int = 1280 use_linear_projection: bool = False num_attention_heads: Union = 8 motion_max_seq_length: int = 32 motion_num_attention_heads: int = 8 use_motion_mid_block: int = True encoder_hid_dim: Optional = None encoder_hid_dim_type: Optional = None ) A modified conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a +sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). disable_freeu < source > ( ) Disables the FreeU mechanism. enable_forward_chunking < source > ( chunk_size: Optional = None dim: int = 0 ) Parameters chunk_size (int, optional) — +The chunk size of the feed-forward layers. If not specified, will run feed-forward layer individually +over each tensor of dim=dim. dim (int, optional, defaults to 0) — +The dimension over which the feed-forward computation should be chunked. Choose between dim=0 (batch) +or dim=1 (sequence length). Sets the attention processor to use feed forward +chunking. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate the “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism from https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stage blocks where they are being applied. Please refer to the official repository for combinations of values that +are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None added_cond_kwargs: Optional = None down_block_additional_residuals: Optional = None mid_block_additional_residual: Optional = None return_dict: bool = True ) → ~models.unet_3d_condition.UNet3DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, num_frames, channel, height, width. timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). +timestep_cond — (torch.Tensor, optional, defaults to None): +Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed +through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. +down_block_additional_residuals — (tuple of torch.Tensor, optional): +A tuple of tensors that if specified are added to the residuals of down unet blocks. +mid_block_additional_residual — (torch.Tensor, optional): +A tensor that if specified is added to the residual of the middle unet block. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_3d_condition.UNet3DConditionOutput instead of a plain +tuple. Returns +~models.unet_3d_condition.UNet3DConditionOutput or tuple + +If return_dict is True, an ~models.unet_3d_condition.UNet3DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules +unfrozen for fine tuning. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.models.unets.unet_3d_condition.UNet3DConditionOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. diff --git a/scrapped_outputs/da1b407a1b60fb5fe31c24d51ed13b4e.txt b/scrapped_outputs/da1b407a1b60fb5fe31c24d51ed13b4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..02948f26017297db150c2f1b80c70d14cf529652 --- /dev/null +++ b/scrapped_outputs/da1b407a1b60fb5fe31c24d51ed13b4e.txt @@ -0,0 +1,187 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. 
Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means it’s usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this’ll increase the effective batch size of the prior pipeline by 2x. Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image 🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
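Only the Kandinsky 2.1 tab of the two-stage example above survives in this scrape. Before moving on to the combined/auto pipelines below, here is a rough sketch of the same flow with the Kandinsky 2.2 classes; as noted earlier, the 2.2 decoder is conditioned on image_embeds only (no prompt), and the kandinsky-community/kandinsky-2-2-decoder checkpoint name is an assumption:

import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

prior_pipeline = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipeline = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting"
negative_prompt = "low quality, bad quality"

# the prior maps text to image embeddings, exactly as in the 2.1 example
image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple()

# the 2.2 decoder only takes the image embeddings
image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0]
image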
Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Kandinsky 3 Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) 🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. Load the prior pipeline: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: Kandinsky 2.1 Kandinsky 2.2 Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. 
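Of the tabbed inpainting example that follows, only the Kandinsky 2.1 variant is preserved in this scrape. A hedged sketch of the 2.2 variant is essentially the same call with a different checkpoint; the kandinsky-community/kandinsky-2-2-decoder-inpaint id is assumed here and should resolve to the KandinskyV22InpaintCombinedPipeline named above:

import torch
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image, make_image_grid

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png")
mask = np.zeros((768, 768), dtype=np.float32)
mask[:250, 250:-250] = 1  # white (1) marks the region to repaint

output_image = pipe(prompt="a hat", image=init_image, mask_image=mask).images[0]
make_image_grid([init_image, Image.fromarray((mask * 255).astype("uint8"), "L"), output_image], rows=1, cols=3)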
Use the AutoPipelineForInpainting for this: Kandinsky 2.1 Kandinsky 2.2 Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: Kandinsky 2.1 Kandinsky 2.2 Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: Kandinsky 2.1 Kandinsky 2.2 Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. 
Let’s load an image and extract it’s depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = 
pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/da39e3a8c4a6a56dfbd04608c46d7862.txt b/scrapped_outputs/da39e3a8c4a6a56dfbd04608c46d7862.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/da6896202d84edcd57f8a456afcce281.txt b/scrapped_outputs/da6896202d84edcd57f8a456afcce281.txt new file mode 100644 index 0000000000000000000000000000000000000000..f971d25fc44aa74df592b1a56356146d3ed210ee --- /dev/null +++ b/scrapped_outputs/da6896202d84edcd57f8a456afcce281.txt @@ -0,0 +1,83 @@ +K-Diffusion k-diffusion is a popular library created by Katherine Crowson. We provide StableDiffusionKDiffusionPipeline and StableDiffusionXLKDiffusionPipeline that allow you to run Stable DIffusion with samplers from k-diffusion. Note that most the samplers from k-diffusion are implemented in Diffusers and we recommend using existing schedulers. You can find a mapping between k-diffusion samplers and schedulers in Diffusers here StableDiffusionKDiffusionPipeline class diffusers.StableDiffusionKDiffusionPipeline < source > ( vae text_encoder tokenizer unet scheduler safety_checker feature_extractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights This is an experimental pipeline and is likely to change in the future. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionXLKDiffusionPipeline class diffusers.StableDiffusionXLKDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. 
unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. Pipeline for text-to-image generation using Stable Diffusion XL and k-diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
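As a quick, hedged illustration of the toggles documented above (this is not an official snippet; the FreeU factors are illustrative placeholders, see the official FreeU repository for values recommended per model):

# requires `pip install k-diffusion` for the K-diffusion pipelines
import torch
from diffusers import StableDiffusionXLKDiffusionPipeline

pipe = StableDiffusionXLKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# decode the VAE in slices/tiles to trade a little speed for memory
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# FreeU: b1/b2 amplify backbone features, s1/s2 attenuate skip features
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4)

# both mechanisms can be switched off again
pipe.disable_freeu()
pipe.disable_vae_tiling()
pipe.disable_vae_slicing()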
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
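This scrape contains no end-to-end example for the K-diffusion pipelines, so here is a rough usage sketch. It assumes the k-diffusion package is installed and that set_scheduler accepts a k-diffusion sampler name such as "sample_dpmpp_2m"; treat the sampler name, checkpoint, and step count as illustrative:

# pip install k-diffusion
import torch
from diffusers import StableDiffusionKDiffusionPipeline

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# select one of the samplers implemented in k-diffusion by name
pipe.set_scheduler("sample_dpmpp_2m")

generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    "an astronaut riding a horse on mars, highly detailed",
    num_inference_steps=25,
    generator=generator,
).images[0]
image.save("astronaut_k_diffusion.png")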
diff --git a/scrapped_outputs/da80dabb60b96db00d8bdc64f05d5ece.txt b/scrapped_outputs/da80dabb60b96db00d8bdc64f05d5ece.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/da80dabb60b96db00d8bdc64f05d5ece.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. 
Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 
🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/da9be19bf8ae8742c44126d0a7b6aaa2.txt b/scrapped_outputs/da9be19bf8ae8742c44126d0a7b6aaa2.txt new file mode 100644 index 0000000000000000000000000000000000000000..65a9cfaf29f703e7c7512eba0f3f7082686a6b82 --- /dev/null +++ b/scrapped_outputs/da9be19bf8ae8742c44126d0a7b6aaa2.txt @@ -0,0 +1,40 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. 
You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/daec8824e12ec9dbf84c60b2f6f32942.txt b/scrapped_outputs/daec8824e12ec9dbf84c60b2f6f32942.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/daec8824e12ec9dbf84c60b2f6f32942.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". 
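To make the shape contract of the block documented above concrete, here is a small illustrative sketch; the channel sizes (320 to 1280) and the batch/sequence dimensions are arbitrary values chosen only for the example:

import torch
from diffusers.models.activations import GELU

# project 320 input channels to 1280 output channels, then apply GELU with the tanh approximation
act = GELU(dim_in=320, dim_out=1280, approximate="tanh")

hidden_states = torch.randn(2, 77, 320)  # (batch, sequence, channels)
out = act(hidden_states)                 # -> shape (2, 77, 1280)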
GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/db050769094d8a4c8435e962c70727f2.txt b/scrapped_outputs/db050769094d8a4c8435e962c70727f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..b81f2722b690ac937fbbd2971ff917d220450f6f --- /dev/null +++ b/scrapped_outputs/db050769094d8a4c8435e962c70727f2.txt @@ -0,0 +1,4 @@ +Overview + +A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. +This section introduces you to some of the tasks supported by our pipelines such as unconditional image generation and different techniques and variations of text-to-image generation. You’ll also learn how to gain more control over the generation process by setting a seed for reproducibility and weighting prompts to adjust the influence certain words in the prompt has over the output. Finally, you’ll see how you can create a community pipeline for a custom task like generating images from speech. diff --git a/scrapped_outputs/db9212358ecee7ded4c83abf9cb335d7.txt b/scrapped_outputs/db9212358ecee7ded4c83abf9cb335d7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e052d55cf32f3cae512726b0ae2689a14cfb5d64 --- /dev/null +++ b/scrapped_outputs/db9212358ecee7ded4c83abf9cb335d7.txt @@ -0,0 +1,378 @@ +unCLIP + + +Overview + +Hierarchical Text-Conditional Image Generation with CLIP Latents by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen +The abstract of the paper is the following: +Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. 
We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. +The unCLIP model in diffusers comes from kakaobrain’s karlo and the original codebase can be found here. Additionally, lucidrains has a DALL-E 2 recreation here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_unclip.py +Text-to-Image Generation +- +pipeline_unclip_image_variation.py +Image-Guided Image Generation +- + +UnCLIPPipeline + + +class diffusers.UnCLIPPipeline + +< +source +> +( +prior: PriorTransformer +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +super_res_first: UNet2DModel +super_res_last: UNet2DModel +prior_scheduler: UnCLIPScheduler +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. + + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process. Just a modified DDPMScheduler. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. + + + +Pipeline for text-to-image generation using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +prior_num_inference_steps: int = 25 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prior_latents: typing.Optional[torch.FloatTensor] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +text_model_output: typing.Union[transformers.models.clip.modeling_clip.CLIPTextModelOutput, typing.Tuple, NoneType] = None +text_attention_mask: typing.Optional[torch.Tensor] = None +prior_guidance_scale: float = 4.0 +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. This can only be left undefined if +text_model_output and text_attention_mask is passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
+ + +prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. + + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text outputs +can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left to None. + + +text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
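As a rough end-to-end sketch of calling this pipeline (not taken verbatim from the docs; it assumes the kakaobrain/karlo-v1-alpha checkpoint, a CUDA device, and the default step counts listed above):

import torch
from diffusers import UnCLIPPipeline

# load the unCLIP (Karlo) text-to-image pipeline in half precision
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a big red frog on a green leaf"
# prior, decoder and super resolution steps can be tuned independently
image = pipe(
    prompt,
    prior_num_inference_steps=25,
    decoder_num_inference_steps=25,
    super_res_num_inference_steps=7,
).images[0]
image.save("frog.png")

prior_guidance_scale and decoder_guidance_scale can be passed in the same call to trade prompt adherence against image quality, as described in the parameter list above.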
+ +class diffusers.UnCLIPImageVariationPipeline + +< +source +> +( +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +feature_extractor: CLIPFeatureExtractor +image_encoder: CLIPVisionModelWithProjection +super_res_first: UNet2DModel +super_res_last: UNet2DModel +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. + + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. + + + +Pipeline to generate variations from an input image using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor, NoneType] = None +num_images_per_prompt: int = 1 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Optional[torch._C.Generator] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +image_embeddings: typing.Optional[torch.Tensor] = None +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPFeatureExtractor. Can be left to None only when image_embeddings are passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can the be left to None. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/dba4da4d6f5772778e0002e60b0316ea.txt b/scrapped_outputs/dba4da4d6f5772778e0002e60b0316ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..e190f6dfad2eb4f89c2931b512ee77701b21aecc --- /dev/null +++ b/scrapped_outputs/dba4da4d6f5772778e0002e60b0316ea.txt @@ -0,0 +1,237 @@ +How to contribute to Diffusers 🧨 + +We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! +It also helps us if you spread the word: reference the library from blog posts +on the awesome projects it made possible, shout out on Twitter every time it has +helped you, or simply star the repo to say “thank you”. +We encourage everyone to start by saying 👋 in our public Discord channel. We discuss the hottest trends about diffusion models, ask questions, show-off personal projects, help each other with contributions, or just hang out ☕. +Whichever way you choose to contribute, we strive to be part of an open, welcoming and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. + +Overview + +You can contribute in so many ways! Just to name a few: +Fixing outstanding issues with the existing code. +Implementing new diffusion pipelines, new schedulers or new models. +Contributing to the examples. +Contributing to the documentation. +Submitting issues related to bugs or desired new features. +All are equally valuable to the community. + +Browse GitHub issues for suggestions + +If you need inspiration, you can look out for issues you’d like to tackle to contribute to the library. 
There are a few filters that can be helpful: +See Good first issues for general opportunities to contribute and getting started with the codebase. +See New pipeline/model to contribute exciting new diffusion models or diffusion pipelines. +See New scheduler to work on new samplers and schedulers. + +Submitting a new issue or feature request + +Do your best to follow these guidelines when submitting an issue or a feature +request. It will make it easier for us to come back to you quickly and with good +feedback. + +Did you find a bug? + +The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. +First, we would really appreciate it if you could make sure the bug was not +already reported (use the search bar on GitHub under Issues). + +Do you want to implement a new diffusion pipeline / diffusion model? + +Awesome! Please provide the following information: +Short description of the diffusion pipeline and link to the paper; +Link to the implementation if it is open-source; +Link to the model weights if they are available. +If you are willing to contribute the model yourself, let us know so we can best +guide you. + +Do you want a new feature (that is not a model)? + +A world-class feature request addresses the following points: +Motivation first: +Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. +Is it related to something you would need for a project? We’d love to hear +about it! +Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. +Write a full paragraph describing the feature; +Provide a code snippet that demonstrates its future use; +In case this is related to a paper, please attach a link; +Attach any additional information (drawings, screenshots, etc.) you think may help. +If your issue is well written we’re already 80% of the way there by the time you +post it. + +Start contributing! (Pull Requests) + +Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. +You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. +Follow these steps to start contributing (supported Python versions): +Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. +Clone your fork to your local disk, and add the base repository as a remote: + + + Copied +$ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git +Create a new branch to hold your development changes: + + + Copied +$ git checkout -b a-descriptive-name-for-my-changes +Do not work on the main branch. +Set up a development environment by running the following command in a virtual environment: + + + Copied +$ pip install -e ".[dev]" +(If Diffusers was already installed in the virtual environment, remove +it with pip uninstall diffusers before reinstalling it in editable +mode with the -e flag.) 
+To run the full test suite, you might need the additional dependency on transformers and datasets which requires a separate source +install: + + + Copied +$ git clone https://github.com/huggingface/transformers +$ cd transformers +$ pip install -e . + + + Copied +$ git clone https://github.com/huggingface/datasets +$ cd datasets +$ pip install -e . +If you have already cloned that repo, you might need to git pull to get the most recent changes in the datasets +library. +Develop the features on your branch. +As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: + + + Copied +$ pytest tests/.py +You can also run the full suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: + + + Copied +$ make test +For more information about tests, check out the +dedicated documentation +🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: + + + Copied +$ make style +🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however you can also run the same checks with: + + + Copied +$ make quality +Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: + + + Copied +$ git add modified_file.py +$ git commit +It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: + + + Copied +$ git fetch upstream +$ git rebase upstream/main +Push the changes to your account using: + + + Copied +$ git push -u origin a-descriptive-name-for-my-changes +Once you are satisfied (and the checklist below is happy too), go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. +It’s ok if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. + +Checklist + +The title of your pull request should be a summary of its contribution; +If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); +To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; +Make sure existing tests pass; +Add high-coverage tests. No quality testing = no merge.If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +If you are adding a new tokenizer, write tests, and make sure +RUN_SLOW=1 python -m pytest tests/test_tokenization_{your_model_name}.py passes. +CircleCI does not run the slow tests, but GitHub actions does every night! +All public methods must have informative docstrings that work nicely with sphinx. See [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) for an example. 
+Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos and other non-text files. We prefer to leverage a hf.co hosted dataset like +the ones hosted on hf-internal-testing in which to place these files and reference or huggingface/documentation-images. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. + +Tests + +An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. +We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: + + + Copied +$ python -m pytest -n auto --dist=loadfile -s -v ./tests/ +In fact, that’s how make test is implemented! +You can specify a smaller set of tests in order to test only the feature +you’re working on. +By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! + + + Copied +$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ +unittest is fully supported, here’s how to run tests with it: + + + Copied +$ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v + +Syncing forked main with upstream (HuggingFace) main + +To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: +When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. +If a PR is absolutely necessary, use the following steps after checking out your branch: + + + Copied +$ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing + +Style guide + +For documentation strings, 🧨 Diffusers follows the google style. +This guide was heavily inspired by the awesome scikit-learn guide to contributing. diff --git a/scrapped_outputs/dc041b9f3ba7957f65d2f0a48c45943e.txt b/scrapped_outputs/dc041b9f3ba7957f65d2f0a48c45943e.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/dc041b9f3ba7957f65d2f0a48c45943e.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. 
This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. 
The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. 
They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) 
function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/dc04e11d418f45d12a554c322e9d3120.txt b/scrapped_outputs/dc04e11d418f45d12a554c322e9d3120.txt new file mode 100644 index 0000000000000000000000000000000000000000..743357598369036ae890caa3bf05637fb12c3b84 --- /dev/null +++ b/scrapped_outputs/dc04e11d418f45d12a554c322e9d3120.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime. in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. 
time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet1DOutput instead of a plain tuple. Returns +UNet1DOutput or tuple + +If return_dict is True, an UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/dc0e7981b4388d7b821976888648a5a2.txt b/scrapped_outputs/dc0e7981b4388d7b821976888648a5a2.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f36bce79e2596c156b8269c32ff5e3ac4f935d2 --- /dev/null +++ b/scrapped_outputs/dc0e7981b4388d7b821976888648a5a2.txt @@ -0,0 +1,9 @@ +Stable Video Diffusion Stable Video Diffusion was proposed in Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets by Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach. The abstract from the paper is: We present Stable Video Diffusion - a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. 
However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D-prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at this https URL. To learn how to use Stable Video Diffusion, take a look at the Stable Video Diffusion guide. Check out the Stability AI Hub organization for the base and extended frame checkpoints! Tips Video generation is memory-intensive and one way to reduce your memory usage is to set enable_forward_chunking on the pipeline’s UNet so you don’t run the entire feedforward layer at once. Breaking it up into chunks in a loop is more efficient. Check out the Text or image-to-video guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage. StableVideoDiffusionPipeline class diffusers.StableVideoDiffusionPipeline < source > ( vae: AutoencoderKLTemporalDecoder image_encoder: CLIPVisionModelWithProjection unet: UNetSpatioTemporalConditionModel scheduler: EulerDiscreteScheduler feature_extractor: CLIPImageProcessor ) Parameters vae (AutoencoderKLTemporalDecoder) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (laion/CLIP-ViT-H-14-laion2B-s32B-b79K). unet (UNetSpatioTemporalConditionModel) — +A UNetSpatioTemporalConditionModel to denoise the encoded image latents. scheduler (EulerDiscreteScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images. Pipeline to generate video from an input image using Stable Video Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). StableVideoDiffusionPipelineOutput class diffusers.pipelines.stable_video_diffusion.StableVideoDiffusionPipelineOutput < source > ( frames: Union ) Parameters frames ([List[List[PIL.Image.Image]], np.ndarray, torch.FloatTensor]) — +List of denoised PIL images of length batch_size or numpy array or torch tensor +of shape (batch_size, num_frames, height, width, num_channels). Output class for Stable Video Diffusion pipeline. 
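A rough usage sketch, assuming the stabilityai/stable-video-diffusion-img2vid-xt checkpoint and a local conditioning image named rocket.png (see the Stable Video Diffusion guide mentioned above for the complete walkthrough):

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# load the image-to-video pipeline in half precision and offload submodules to save memory
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# any conditioning image; the SVD checkpoints expect 1024x576 inputs by default
image = load_image("rocket.png").resize((1024, 576))

generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)

Lowering decode_chunk_size reduces peak memory during VAE decoding at some cost in temporal consistency, in line with the memory tips above.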
diff --git a/scrapped_outputs/dc1523bc3daca0d91ada16d9ec9daf7f.txt b/scrapped_outputs/dc1523bc3daca0d91ada16d9ec9daf7f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/dc4e947c2219b4f7938e9bee5eff3a77.txt b/scrapped_outputs/dc4e947c2219b4f7938e9bee5eff3a77.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/dc4e947c2219b4f7938e9bee5eff3a77.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. Start by installing DeepCache: Copied pip install DeepCache Then load and enable the DeepCacheSDHelper: Copied import torch + from diffusers import StableDiffusionPipeline + pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") + ++ from DeepCache import DeepCacheSDHelper ++ helper = DeepCacheSDHelper(pipe=pipe) ++ helper.set_params( ++ cache_interval=3, ++ cache_branch_id=0, ++ ) ++ helper.enable() + + image = pipe("a photo of an astronaut on a moon").images[0] The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval means the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. +Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset. Benchmark We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Resolution Batch size Original DeepCache(I=3, B=0) DeepCache(I=5, B=0) DeepCache(I=5, B=1) 512 8 15.96 6.88(2.32x) 5.03(3.18x) 7.27(2.20x) 4 8.39 3.60(2.33x) 2.62(3.21x) 3.75(2.24x) 1 2.61 1.12(2.33x) 0.81(3.24x) 1.11(2.35x) 768 8 43.58 18.99(2.29x) 13.96(3.12x) 21.27(2.05x) 4 22.24 9.67(2.30x) 7.10(3.13x) 10.74(2.07x) 1 6.33 2.72(2.33x) 1.97(3.21x) 2.98(2.12x) 1024 8 101.95 45.57(2.24x) 33.72(3.02x) 53.00(1.92x) 4 49.25 21.86(2.25x) 16.19(3.04x) 25.78(1.91x) 1 13.83 6.07(2.28x) 4.43(3.12x) 7.15(1.93x) diff --git a/scrapped_outputs/dc5bad4d9021a9b5babec4c0da16a015.txt b/scrapped_outputs/dc5bad4d9021a9b5babec4c0da16a015.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/dc5bad4d9021a9b5babec4c0da16a015.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. 
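Since the page above states that DeepCache also accelerates StableDiffusionXLPipeline, the same helper can presumably be attached to an SDXL pipeline in the same way; a minimal sketch under that assumption (the stabilityai/stable-diffusion-xl-base-1.0 checkpoint and the prompt are only placeholders):

import torch
from diffusers import StableDiffusionXLPipeline
from DeepCache import DeepCacheSDHelper

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# cache high-level U-Net features every 3 steps on the shallowest branch and reuse them in between
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a photo of an astronaut riding a horse on the moon").images[0]

helper.disable()  # restore the original, uncached pipeline behavior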
The description from its GitHub page is: Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses CLIP model and diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1.
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. 
Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. 
prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... 
).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. 
When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image +import os + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky 2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a pytorch tensor as mask only if the +image you passed is a pytorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or numpy array, mask should also be either a PIL image or a numpy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a numpy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1.
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/dc699185b641944532b2cada24e3c273.txt b/scrapped_outputs/dc699185b641944532b2cada24e3c273.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/dc699185b641944532b2cada24e3c273.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code.
There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/dc727564943fd827b97d8400b37c4f4e.txt b/scrapped_outputs/dc727564943fd827b97d8400b37c4f4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/dc727564943fd827b97d8400b37c4f4e.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. 
Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
__call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. 
cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/dc7af353ee26e22fb088f11605b45c95.txt b/scrapped_outputs/dc7af353ee26e22fb088f11605b45c95.txt new file mode 100644 index 0000000000000000000000000000000000000000..07086f82bad81c666b44ca2d095feabb72569dd6 --- /dev/null +++ b/scrapped_outputs/dc7af353ee26e22fb088f11605b45c95.txt @@ -0,0 +1,261 @@ +Pseudo numerical methods for diffusion models (PNDM) + + +Overview + +Original implementation can be found here. + +PNDMScheduler + + +class diffusers.PNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +skip_prk_steps: bool = False +set_alpha_to_one: bool = False +prediction_type: str = 'epsilon' +steps_offset: int = 0 + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. 
+ + +skip_prk_steps (bool) — +allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required +before plms steps; defaults to False. + + +set_alpha_to_one (bool, default False) — +each diffusion step uses the value of alphas product at that step and at the previous one. For the final +step there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the value of alpha at step 0. + + +prediction_type (str, default epsilon, optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process) +or v_prediction (see section 2.4 https://imagen.research.google/video/paper.pdf) + + +steps_offset (int, default 0) — +an offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False, to make the last step use step 0 for the previous alpha product, as done in +stable diffusion. + + + +Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques, +namely Runge-Kutta method and a linear multi-step method. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +SchedulerOutput or tuple + + + +SchedulerOutput if return_dict is True, otherwise a tuple. When +returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). +This function calls step_prk() or step_plms() depending on the internal variable counter. + +step_plms + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. 
+ + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple +times to approximate the solution. + +step_prk + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the +solution to the differential equation. diff --git a/scrapped_outputs/dcb92735ab4022b81baa3ba2b7bd8c38.txt b/scrapped_outputs/dcb92735ab4022b81baa3ba2b7bd8c38.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/dcb92735ab4022b81baa3ba2b7bd8c38.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable guidance scale by setting guidance_scale=0.0 SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. 
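A minimal sketch of how these tips fit together is shown below. It is not taken from the SDXL Turbo guide itself; the "stabilityai/sdxl-turbo" checkpoint name is assumed here (check the official model card for licensing first), and any SDXL text-to-image pipeline would work in place of AutoPipelineForText2Image.


 Copied
import torch
from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler

# assumed checkpoint name; verify the license on the official model card before using it
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# follow the tips above: trailing timestep spacing, guidance disabled, very few steps, 512x512
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")

prompt = "a photo of a raccoon wearing a velvet robe"
image = pipe(prompt, guidance_scale=0.0, num_inference_steps=1, height=512, width=512).images[0]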
Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/dd1780584757a6e78b90f182992b6b17.txt b/scrapped_outputs/dd1780584757a6e78b90f182992b6b17.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/dd1780584757a6e78b90f182992b6b17.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/dd1b9743c6c7b1a18b7268c7eac180ca.txt b/scrapped_outputs/dd1b9743c6c7b1a18b7268c7eac180ca.txt new file mode 100644 index 0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/dd1b9743c6c7b1a18b7268c7eac180ca.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. 
This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
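As a brief illustration of the two methods above, the sketch below instantiates the model with the default configuration from the signature and swaps the attention processor; AttnProcessor is just one example of a processor class you could pass, and in practice you would usually load pretrained weights instead of a fresh model.


 Copied
from diffusers import UVit2DModel
from diffusers.models.attention_processor import AttnProcessor

model = UVit2DModel()  # default configuration; a fairly large model

# set one processor instance for every attention layer...
model.set_attn_processor(AttnProcessor())

# ...and restore the default attention implementation
model.set_default_attn_processor()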
UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/dd261968bc69cd2dbd30987e8415dd7c.txt b/scrapped_outputs/dd261968bc69cd2dbd30987e8415dd7c.txt new file mode 100644 index 0000000000000000000000000000000000000000..8f122a19de1be956b49c82971afd11400b04f183 --- /dev/null +++ b/scrapped_outputs/dd261968bc69cd2dbd30987e8415dd7c.txt @@ -0,0 +1,212 @@ +Effective and efficient diffusion + + + + + + + + + + + + +Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. +This is why it’s important to get the most computational (speed) and memory (GPU RAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. +This tutorial walks you through how to generate faster and better with the DiffusionPipeline. +Begin by loading the runwayml/stable-diffusion-v1-5 model: + + + Copied +from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id) +The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: + + + Copied +prompt = "portrait photo of a old warrior chief" + +Speed + +💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! +One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: + + + Copied +pipeline = pipeline.to("cuda") +To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: + + + Copied +generator = torch.Generator("cuda").manual_seed(0) +Now you can generate an image: + + + Copied +image = pipeline(prompt, generator=generator).images[0] +image + +This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. +Let’s start by loading the model in float16 and generate an image: + + + Copied +import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image + +This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 
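If you want to reproduce these timings on your own hardware, a simple wall-clock measurement is enough. The snippet below is a sketch that reuses the pipeline and prompt defined above; it is not part of the original tutorial.


 Copied
import time
import torch

# make sure all queued GPU work has finished before and after reading the clock
torch.cuda.synchronize()
start = time.perf_counter()
image = pipeline(prompt, generator=torch.Generator("cuda").manual_seed(0)).images[0]
torch.cuda.synchronize()
print(f"generation took {time.perf_counter() - start:.1f}s")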
+💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. +Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: + + + Copied +pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] +The Stable Diffusion model uses the PNDMScheduler by default, which usually requires ~50 inference steps, but more performant schedulers like the DPMSolverMultistepScheduler require only ~20 or 25 inference steps. Use the ConfigMixin.from_config() method to load a new scheduler: + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) +Now set the num_inference_steps to 20: + + + Copied +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image + +Great, you’ve managed to cut the inference time to just 4 seconds! ⚡️ + +Memory + +The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). +Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. + + + Copied +def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} +You’ll also need a function that’ll display each batch of images: + + + Copied +from PIL import Image + + +def image_grid(imgs, rows=2, cols=2): + w, h = imgs[0].size + grid = Image.new("RGB", size=(cols * w, rows * h)) + + for i, img in enumerate(imgs): + grid.paste(img, box=(i % cols * w, i // cols * h)) + return grid +Start with batch_size=4 and see how much memory you’ve consumed: + + + Copied +images = pipeline(**get_inputs(batch_size=4)).images +image_grid(images) +Unless you have a GPU with more RAM, the code above probably returned an OOM error!
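To see how close a given batch size gets to your GPU's limit before it actually fails, you can also read PyTorch's peak-memory counter. The snippet below is a sketch (not part of the original tutorial) that reuses get_inputs from above with a smaller batch.


 Copied
import torch

torch.cuda.reset_peak_memory_stats()
images = pipeline(**get_inputs(batch_size=2)).images
print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")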
Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: + + + Copied +pipeline.enable_attention_slicing() +Now try increasing the batch_size to 8! + + + Copied +images = pipeline(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. + +Quality + +In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. + +Better checkpoints + +The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. +As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! + +Better pipeline components + +You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autodecoder from Stability AI into the pipeline, and generate some images: + + + Copied +from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + + +Better prompt engineering + +The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: +How is the image or similar images of the one I want to generate stored on the internet? +What additional detail can I give that steers the model towards the style I want? +With this in mind, let’s improve the prompt to include color and higher quality details: + + + Copied +prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" +Generate a batch of images with the new prompt: + + + Copied +images = pipeline(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Pretty impressive! 
Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: + + + Copied +prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +image_grid(images) + + +Next steps + +In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: +Enable xFormers memory efficient attention mechanism for faster speed and reduced memory consumption. +Learn how in PyTorch 2.0, torch.compile can yield 2-9% faster inference speed. +Many optimization techniques for inference are also included in this memory and speed guide, such as memory offloading. diff --git a/scrapped_outputs/dd2ed2b0969f3987d217efd90b1f08a9.txt b/scrapped_outputs/dd2ed2b0969f3987d217efd90b1f08a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/dd2ed2b0969f3987d217efd90b1f08a9.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt, unlike existing text-to-image diffusion models such as Stable Diffusion, which only generate an image. With almost the same number of parameters, LDM3D manages to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper. ldm3d-4c. The new version of LDM3D using 4-channel inputs instead of 6-channel inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner.
This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. 
Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from communauty pipeline. diff --git a/scrapped_outputs/dd3a66637d44a4b65aecd7a3ebcadd5c.txt b/scrapped_outputs/dd3a66637d44a4b65aecd7a3ebcadd5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..6024bf1a00e90500c0a7ce1aa584ee7f009df150 --- /dev/null +++ b/scrapped_outputs/dd3a66637d44a4b65aecd7a3ebcadd5c.txt @@ -0,0 +1,448 @@ +Singlestep DPM-Solver + + +Overview + +Original paper can be found here and the improved version. The original implementation can be found here. + +DPMSolverSinglestepScheduler + + +class diffusers.DPMSolverSinglestepScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Optional[numpy.ndarray] = None +solver_order: int = 2 +prediction_type: str = 'epsilon' +thresholding: bool = False +dynamic_thresholding_ratio: float = 0.995 +sample_max_value: float = 1.0 +algorithm_type: str = 'dpmsolver++' +solver_type: str = 'midpoint' +lower_order_final: bool = True + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value. + + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +solver_order (int, default 2) — +the order of DPM-Solver; can be 1 or 2 or 3. We recommend to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. + + +prediction_type (str, default epsilon) — +indicates whether the model predicts the noise (epsilon), or the data / x0. One of epsilon, sample, +or v-prediction. + + +thresholding (bool, default False) — +whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). 
+For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to +use the dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion +models (such as stable-diffusion). + + +dynamic_thresholding_ratio (float, default 0.995) — +the ratio for the dynamic thresholding method. Default is 0.995, the same as Imagen +(https://arxiv.org/abs/2205.11487). + + +sample_max_value (float, default 1.0) — +the threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++. + + +algorithm_type (str, default dpmsolver++) — +the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the +algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in +https://arxiv.org/abs/2211.01095. We recommend to use dpmsolver++ with solver_order=2 for guided +sampling (e.g. stable-diffusion). + + +solver_type (str, default midpoint) — +the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects +the sample quality, especially for small number of steps. We empirically find that midpoint solvers are +slightly better, so we recommend to use the midpoint type. + + +lower_order_final (bool, default True) — +whether to use lower-order solvers in the final steps. For singlestep schedulers, we recommend to enable +this to use up all the function evaluations. + + + +DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with +the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in only 10 steps. +For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095 +Currently, we support the singlestep DPM-Solver for both noise prediction models and data prediction models. We +recommend to use solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. +We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as +stable-diffusion). +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +convert_model_output + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the converted model output. + + +Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs. +DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to +discretize an integral of the data prediction model. 
So we need to first convert the model output to the +corresponding type to match the algorithm. +Note that the algorithm type and the model type is decoupled. That is to say, we can use either DPM-Solver or +DPM-Solver++ for both noise prediction model and data prediction model. + +dpm_solver_first_order_update + +< +source +> +( +model_output: FloatTensor +timestep: int +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the first-order DPM-Solver (equivalent to DDIM). +See https://arxiv.org/abs/2206.00927 for the detailed derivation. + +get_order_list + +< +source +> +( +num_inference_steps: int + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Computes the solver order at each time step. + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +singlestep_dpm_solver_second_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the second-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-2]. + +singlestep_dpm_solver_third_order_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. 
+ + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the third-order singlestep DPM-Solver. +It computes the solution at time prev_timestep from the time timestep_list[-3]. + +singlestep_dpm_solver_update + +< +source +> +( +model_output_list: typing.List[torch.FloatTensor] +timestep_list: typing.List[int] +prev_timestep: int +sample: FloatTensor +order: int + +) +→ +torch.FloatTensor + +Parameters + +model_output_list (List[torch.FloatTensor]) — +direct outputs from learned diffusion model at current and latter timesteps. + + +timestep (int) — current and latter discrete timestep in the diffusion chain. + + +prev_timestep (int) — previous discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +order (int) — +the solver order at this step. + + +Returns + +torch.FloatTensor + + + +the sample tensor at the previous timestep. + + +One step for the singlestep DPM-Solver. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the singlestep DPM-Solver. diff --git a/scrapped_outputs/dd55eef5c177d252c20d1afd42ffcc39.txt b/scrapped_outputs/dd55eef5c177d252c20d1afd42ffcc39.txt new file mode 100644 index 0000000000000000000000000000000000000000..074dfb36700a1ed683f1c6891afc97d56e1cb780 --- /dev/null +++ b/scrapped_outputs/dd55eef5c177d252c20d1afd42ffcc39.txt @@ -0,0 +1,113 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. 
+ The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/dd8e6e3fb6247744337d2a9a6de4b8a1.txt b/scrapped_outputs/dd8e6e3fb6247744337d2a9a6de4b8a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..173b882d6bb0b0500124b1e8f97633b6bc0e5c16 --- /dev/null +++ b/scrapped_outputs/dd8e6e3fb6247744337d2a9a6de4b8a1.txt @@ -0,0 +1,62 @@ +LoRA This is experimental and the API may change in the future. LoRA (Low-Rank Adaptation of Large Language Models) is a popular and lightweight training technique that significantly reduces the number of trainable parameters. It works by inserting a smaller number of new weights into the model and only these are trained. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share. LoRA can also be combined with other training techniques like DreamBooth to speedup training. LoRA is very versatile and supported for DreamBooth, Kandinsky 2.2, Stable Diffusion XL, text-to-image, and Wuerstchen. This guide will explore the train_text_to_image_lora.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . 
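If you want to confirm that the source install is the one being picked up, a quick version check (an optional sanity check, not part of the official instructions) is enough; a source build of diffusers usually reports a version ending in .dev0: Copied
import diffusers

# A source (editable) install typically reports a ".dev0"-suffixed version.
print(diffusers.__version__)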
Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the number of epochs to train: Copied accelerate launch train_text_to_image_lora.py \ + --num_train_epochs=150 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the LoRA relevant parameters: --rank: the inner dimension of the low-rank matrices to train; a higher rank means more trainable parameters --learning_rate: the default learning rate is 1e-4, but with LoRA, you can use a higher learning rate Training script The dataset preprocessing code and training loop are found in the main() function, and if you need to adapt the training script, this is where you’ll make your changes. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the LoRA relevant parts of the script. The script begins by adding the new LoRA weights to the attention layers. This involves correctly configuring the weight size for each block in the UNet. 
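Before looking at how the script wires this up, here is a minimal, self-contained sketch of the low-rank update that LoRA learns on top of a frozen weight, to build intuition for what --rank controls. The layer dimensions below are made up for illustration and are not taken from the script: Copied
import torch

# Illustrative dimensions only: attention hidden size, cross-attention dim, and LoRA rank.
hidden_size, cross_attention_dim, rank = 320, 768, 4

W = torch.randn(hidden_size, cross_attention_dim)   # frozen base weight (not trained)
lora_down = torch.randn(rank, cross_attention_dim)  # trainable low-rank factor
lora_up = torch.zeros(hidden_size, rank)            # trainable low-rank factor (zero init, so the update starts as a no-op)
W_effective = W + lora_up @ lora_down               # weight actually used in the forward pass

trainable = lora_down.numel() + lora_up.numel()
print(f"{trainable} trainable parameters instead of {W.numel()}")  # 4352 instead of 245760
In the actual script, these low-rank factors live inside a LoRAAttnProcessor attached to each attention layer, and rank sets their inner dimension.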
You’ll see the rank parameter is used to create the LoRAAttnProcessor: Copied lora_attn_procs = {} +for name in unet.attn_processors.keys(): + cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim + if name.startswith("mid_block"): + hidden_size = unet.config.block_out_channels[-1] + elif name.startswith("up_blocks"): + block_id = int(name[len("up_blocks.")]) + hidden_size = list(reversed(unet.config.block_out_channels))[block_id] + elif name.startswith("down_blocks"): + block_id = int(name[len("down_blocks.")]) + hidden_size = unet.config.block_out_channels[block_id] + + lora_attn_procs[name] = LoRAAttnProcessor( + hidden_size=hidden_size, + cross_attention_dim=cross_attention_dim, + rank=args.rank, + ) + +unet.set_attn_processor(lora_attn_procs) +lora_layers = AttnProcsLayers(unet.attn_processors) The optimizer is initialized with the lora_layers because these are the only weights that’ll be optimized: Copied optimizer = optimizer_cls( + lora_layers.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Aside from setting up the LoRA layers, the training script is more or less the same as train_text_to_image.py! Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Let’s train on the Pokémon BLIP captions dataset to generate your own Pokémon. Set the environment variables MODEL_NAME and DATASET_NAME to the model and dataset respectively. You should also specify where to save the model in OUTPUT_DIR, and the name of the model to save to on the Hub with HUB_MODEL_ID. The script creates and saves the following files to your repository: saved model checkpoints pytorch_lora_weights.safetensors (the trained LoRA weights) If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. A full training run takes ~5 hours on a 2080 Ti GPU with 11GB of VRAM. Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="/sddata/finetune/lora/pokemon" +export HUB_MODEL_ID="pokemon-lora" +export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_NAME \ + --dataloader_num_workers=8 \ + --resolution=512 \ + --center_crop \ + --random_flip \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-04 \ + --max_grad_norm=1 \ + --lr_scheduler="cosine" \ + --lr_warmup_steps=0 \ + --output_dir=${OUTPUT_DIR} \ + --push_to_hub \ + --hub_model_id=${HUB_MODEL_ID} \ + --report_to=wandb \ + --checkpointing_steps=500 \ + --validation_prompt="A pokemon with blue eyes." \ + --seed=1337 Once training has been completed, you can use your model for inference: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_lora_weights("path/to/lora/model", weight_name="pytorch_lora_weights.safetensors") +image = pipeline("A pokemon with blue eyes").images[0] Next steps Congratulations on training a new model with LoRA!
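One more inference tip before the reading list: you can also control how strongly the loaded LoRA weights influence generation. The snippet below is a sketch that continues from the inference example above; the 0.5 scale is just an example value: Copied
# scale=0.0 uses only the base model, scale=1.0 applies the LoRA at full strength
image = pipeline(
    "A pokemon with blue eyes",
    cross_attention_kwargs={"scale": 0.5},
).images[0]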
To learn more about how to use your new model, the following guides may be helpful: Learn how to load different LoRA formats trained using community trainers like Kohya and TheLastBen. Learn how to use and combine multiple LoRA’s with PEFT for inference. diff --git a/scrapped_outputs/dd9f0902f863ae435d627d9b84f508c6.txt b/scrapped_outputs/dd9f0902f863ae435d627d9b84f508c6.txt new file mode 100644 index 0000000000000000000000000000000000000000..9870975bcefa54ea72473d89b0342fceb38f6b83 --- /dev/null +++ b/scrapped_outputs/dd9f0902f863ae435d627d9b84f508c6.txt @@ -0,0 +1,176 @@ +VQDiffusion + + +Overview + +Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo +The abstract of the paper is the following: +We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based on a vector quantized variational autoencoder (VQ-VAE) whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). We find that this latent-space method is well-suited for text-to-image generation tasks because it not only eliminates the unidirectional bias with existing methods but also allows us to incorporate a mask-and-replace diffusion strategy to avoid the accumulation of errors, which is a serious problem with existing methods. Our experiments show that the VQ-Diffusion produces significantly better text-to-image generation results when compared with conventional autoregressive (AR) models with similar numbers of parameters. Compared with previous GAN-based text-to-image methods, our VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin. Finally, we show that the image generation computation in our method can be made highly efficient by reparameterization. With traditional AR methods, the text-to-image generation time increases linearly with the output image resolution and hence is quite time consuming even for normal size images. The VQ-Diffusion allows us to achieve a better trade-off between quality and speed. Our experiments indicate that the VQ-Diffusion model with the reparameterization is fifteen times faster than traditional AR methods while achieving a better image quality. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_vq_diffusion.py +Text-to-Image Generation +- + +VQDiffusionPipeline + + +class diffusers.VQDiffusionPipeline + +< +source +> +( +vqvae: VQModel +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +transformer: Transformer2DModel +scheduler: VQDiffusionScheduler +learned_classifier_free_sampling_embeddings: LearnedClassifierFreeSamplingEmbeddings + +) + + +Parameters + +vqvae (VQModel) — +Vector Quantized Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent +representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. VQ Diffusion uses the text portion of +CLIP, specifically +the clip-vit-base-patch32 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +transformer (Transformer2DModel) — +Conditional transformer to denoise the encoded image latents. + + +scheduler (VQDiffusionScheduler) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. + + + +Pipeline for text-to-image generation using VQ Diffusion +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +num_inference_steps: int = 100 +guidance_scale: float = 5.0 +truncation_rate: float = 1.0 +num_images_per_prompt: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — +Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at +most truncation_rate. The lowest probabilities that would increase the cumulative probability above +truncation_rate are set to zero. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor of shape (batch), optional) — +Pre-generated noisy latents to be used as inputs for image generation. Must be valid embedding indices. +Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will +be generated of completely masked latent pixels. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +ImagePipelineOutput or tuple + + + +~ pipeline_utils.ImagePipelineOutput if return_dict +is True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. + + +Function invoked when calling the pipeline for generation. 
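The reference above does not include a usage snippet, so here is a minimal sketch of text-to-image generation with this pipeline, assuming the microsoft/vq-diffusion-ithq checkpoint from the Hub: Copied
from diffusers import VQDiffusionPipeline

pipeline = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipeline = pipeline.to("cuda")

# truncation_rate < 1.0 zeroes out the lowest-probability classes at each step; 0.86 is just an example value
image = pipeline("teddy bear playing in the pool", truncation_rate=0.86).images[0]
image.save("teddy_bear.png")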
+ +truncate + +< +source +> +( +log_p_x_0: FloatTensor +truncation_rate: float + +) + + + +Truncates log_p_x_0 such that for each column vector, the total cumulative probability is truncation_rate The +lowest probabilities that would increase the cumulative probability above truncation_rate are set to zero. diff --git a/scrapped_outputs/dde82d56aa3abf42dedd5ff9477a4baa.txt b/scrapped_outputs/dde82d56aa3abf42dedd5ff9477a4baa.txt new file mode 100644 index 0000000000000000000000000000000000000000..7645418c174b20843d0dcacad570025d04b154f1 --- /dev/null +++ b/scrapped_outputs/dde82d56aa3abf42dedd5ff9477a4baa.txt @@ -0,0 +1,8 @@ +ScoreSdeVpScheduler ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 🚧 This scheduler is under construction! ScoreSdeVpScheduler class diffusers.schedulers.ScoreSdeVpScheduler < source > ( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 ) Parameters num_train_timesteps (int, defaults to 2000) — +The number of diffusion steps to train the model. beta_min (int, defaults to 0.1) — beta_max (int, defaults to 20) — sampling_eps (int, defaults to 1e-3) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. ScoreSdeVpScheduler is a variance preserving stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
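Because the scheduler is still under construction, the snippet below is only a minimal sketch of instantiating it and preparing inference timesteps, assuming the class is importable from diffusers.schedulers as its path above suggests: Copied
from diffusers.schedulers import ScoreSdeVpScheduler

scheduler = ScoreSdeVpScheduler(num_train_timesteps=2000, beta_min=0.1, beta_max=20, sampling_eps=1e-3)
scheduler.set_timesteps(num_inference_steps=1000)

# Continuous timesteps decreasing from 1 toward sampling_eps
print(scheduler.timesteps[:5])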
set_timesteps < source > ( num_inference_steps device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_pred < source > ( score x t generator = None ) Parameters score () — x () — t () — generator (torch.Generator, optional) — +A random number generator. Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/ddfdebb560b0e2aa005aaf4c473d8e3b.txt b/scrapped_outputs/ddfdebb560b0e2aa005aaf4c473d8e3b.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6514d3d2fc54f9f4492a0add5b64d8d3b87c87a --- /dev/null +++ b/scrapped_outputs/ddfdebb560b0e2aa005aaf4c473d8e3b.txt @@ -0,0 +1,472 @@ +Pipelines + +The DiffusionPipeline is the easiest way to load any pretrained diffusion pipeline from the Hub and to use it in inference. +One should not use the Diffusion Pipeline class for training or fine-tuning a diffusion model. Individual + components of diffusion pipelines are usually trained individually, so we suggest to directly work + with `UNetModel` and `UNetConditionModel`. + +Any diffusion pipeline that is loaded with from_pretrained() will automatically +detect the pipeline type, e.g. StableDiffusionPipeline and consequently load each component of the +pipeline and pass them into the __init__ function of the pipeline, e.g. __init__(). +Any pipeline object can be saved locally with save_pretrained(). + +DiffusionPipeline + + +class diffusers.DiffusionPipeline + +< +source +> +( +) + + + +Base class for all models. +DiffusionPipeline takes care of storing all components (models, schedulers, processors) for diffusion pipelines +and handles methods for loading, downloading and saving models as well as a few methods common to all pipelines to: +move all PyTorch modules to the device of your choice +enabling/disabling the progress bar for the denoising iteration +Class attributes: +config_name (str) — name of the config file that will store the class and module names of all +components of the diffusion pipeline. +_optional_components (Liststr) — list of all components that are optional so they don’t have to be +passed for the pipeline to function (should be overridden by subclasses). + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +device + +< +source +> +( +) +→ +torch.device + +Returns + +torch.device + + + +The torch device on which the pipeline is located. + + + +to + +< +source +> +( +torch_device: typing.Union[str, torch.device, NoneType] = None +silence_dtype_warnings: bool = False + +) + + + + +components + +< +source +> +( +) + + + +The self.components property can be useful to run different pipelines with the same weights and +configurations to not have to re-allocate memory. + +Examples: + + + Copied +>>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... 
) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maxium amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +from_pretrained + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id of a pretrained pipeline hosted inside a model repo on +https://huggingface.co/ Valid repo ids have to be located under a user or organization name, like +CompVis/ldm-text2im-large-256. +A path to a directory containing pipeline weights saved using +save_pretrained(), e.g., ./my_pipeline_directory/. + + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +custom_pipeline (str, optional) — + +This is an experimental feature and is likely to change in the future. 
+ +Can be either: + + +A string, the repo id of a custom pipeline hosted inside a model repo on +https://huggingface.co/. Valid repo ids have to be located under a user or organization name, +like hf-internal-testing/diffusers-dummy-pipeline. + +It is required that the model repo has a file, called pipeline.py that defines the custom +pipeline. + + + +A string, the file name of a community pipeline hosted on GitHub under +https://github.com/huggingface/diffusers/tree/main/examples/community. Valid file names have to +match exactly the file name without .py located under the above link, e.g. +clip_guided_stable_diffusion. + +Community pipelines are always loaded from the current main branch of GitHub. + + + +A path to a directory containing a custom pipeline, e.g., ./my_pipeline_directory/. + +It is required that the directory has a file, called pipeline.py that defines the custom +pipeline. + + + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running huggingface-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +custom_revision (str, optional, defaults to "main" when loading from the Hub and to local version of diffusers when loading from GitHub) — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a diffusers version when loading a +custom pipeline from GitHub. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. specify the folder name here. + + +device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be refined to each +parameter/buffer name, once a given module name is inside, every submodule of it will be sent to the +same device. +To have Accelerate compute the most optimized device_map automatically, set device_map="auto". 
For +more information about each option see designing a device +map. + + +low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading by not initializing the weights and only loading the pre-trained weights. This +also tries to not use more than 1x model size in CPU memory (including peak memory) while loading the +model. This is only supported when torch version >= 1.9.0. If you are using an older version of torch, +setting this argument to True will raise an error. + + +return_cached_folder (bool, optional, defaults to False) — +If set to True, path to downloaded cached folder will be returned in addition to loaded pipeline. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load - and saveable variables - i.e. the pipeline components - of the +specific pipeline class. The overwritten components are then directly passed to the pipelines +__init__ method. See example below for more information. + + +variant (str, optional) — +If specified load weights from variant filename, e.g. pytorch_model..bin. variant is +ignored when using from_flax. + + + +Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights. +The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). +The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come +pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning +task. +The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those +weights are discarded. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models, e.g. "runwayml/stable-diffusion-v1-5" +Activate the special “offline-mode” to use +this method in a firewalled environment. + +Examples: + + + Copied +>>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler + +numpy_to_pil + +< +source +> +( +images + +) + + + +Convert a numpy image or a batch of images to a PIL image. + +save_pretrained + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +safe_serialization: bool = False +variant: typing.Optional[str] = None + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +safe_serialization (bool, optional, defaults to False) — +Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). + + +variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. + + + +Save all variables of the pipeline that can be saved and loaded as well as the pipelines configuration file to +a directory. A pipeline variable can be saved and loaded if its class implements both a save and loading +method. 
The pipeline can easily be re-loaded using the [from_pretrained()](/docs/diffusers/v0.14.0/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained) class method. + +ImagePipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.ImagePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + + +Output class for image pipelines. + +AudioPipelineOutput + + +By default diffusion pipelines return an object of class + +class diffusers.AudioPipelineOutput + +< +source +> +( +audios: ndarray + +) + + +Parameters + +audios (np.ndarray) — +List of denoised samples of shape (batch_size, num_channels, sample_rate). Numpy array present the +denoised audio samples of the diffusion pipeline. + + + +Output class for audio pipelines. diff --git a/scrapped_outputs/de4334b9a4bc881b82f440344ad1061e.txt b/scrapped_outputs/de4334b9a4bc881b82f440344ad1061e.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/de4334b9a4bc881b82f440344ad1061e.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because its only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. 
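If you do create your own dataset, it helps to know the shape the script expects: an image column, a conditioning image column, and a caption column. The sketch below inspects the fusing/fill50k dataset used later in this guide; treat the exact column names as an assumption to check against parse_args() for your version of the script: Copied
from datasets import load_dataset

# Your own dataset should expose comparable columns (image, conditioning image, caption).
dataset = load_dataset("fusing/fill50k", split="train")
print(dataset.column_names)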
Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script!
🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps parameters to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/de8befe2456545fc7d2527b6ffe7349f.txt b/scrapped_outputs/de8befe2456545fc7d2527b6ffe7349f.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bb655d08dd6180d639a4e910ba59f59d090923 --- /dev/null +++ b/scrapped_outputs/de8befe2456545fc7d2527b6ffe7349f.txt @@ -0,0 +1,98 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality.
However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... 
) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/de9cd18a601372a533045561c94c403a.txt b/scrapped_outputs/de9cd18a601372a533045561c94c403a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/de9cd18a601372a533045561c94c403a.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. 
We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. +In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. 
+In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. 
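To make this concrete, a minimal reproduction might look like the following sketch (the checkpoint name is borrowed from examples elsewhere in these docs and is purely illustrative; substitute whatever actually triggers your bug): Copied
# Minimal, self-contained reproduction sketch for a bug report.
# Environment information should be attached separately via `diffusers-cli env`.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
).to("cuda")

# Keep the call as small as possible while still reproducing the error.
image = pipe("a photo of the dolomites", num_inference_steps=2).images[0]
print(image.size)  # describe the actual error or unexpected output in the issue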
For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. 
It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. 
+They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. 
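To illustrate the single-file philosophy, a hypothetical skeleton of such a script could look like this (all argument names are illustrative, not taken from any existing example): Copied
# Hypothetical skeleton of a single-file training example, runnable as:
#   python train_my_method.py --pretrained_model_name_or_path=... --train_data_dir=...
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="Minimal single-file training example skeleton.")
    parser.add_argument("--pretrained_model_name_or_path", type=str, required=True)
    parser.add_argument("--train_data_dir", type=str, required=True)
    parser.add_argument("--output_dir", type=str, default="./output")
    parser.add_argument("--max_train_steps", type=int, default=100)
    return parser.parse_args()


def main():
    args = parse_args()
    # Load the model components, build the dataloader, and run the training loop here,
    # ideally letting the Accelerate library handle device placement and mixed precision.
    print(f"Would train {args.pretrained_model_name_or_path} for {args.max_train_steps} steps.")


if __name__ == "__main__":
    main()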
The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. 
Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. + + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. 
If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. 
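As an illustration, a new slow test might be sketched like this (assuming the slow decorator from diffusers.utils.testing_utils and a checkpoint with a 512x512 default resolution; adapt the names to your model): Copied
# Rough sketch of a @slow test; run with: RUN_SLOW=1 python -m pytest tests/test_my_new_model.py
import unittest

import torch
from diffusers import DiffusionPipeline
from diffusers.utils.testing_utils import slow


class MyNewModelSlowTests(unittest.TestCase):
    @slow
    def test_full_inference(self):
        pipe = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
        ).to("cuda")
        image = pipe("a photo of the dolomites", num_inference_steps=2).images[0]
        # Assert on something cheap and stable; the 512x512 default resolution is an assumption.
        self.assertEqual(image.size, (512, 512))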
How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. 
From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/debd042c67072a597485816d9fbd72df.txt b/scrapped_outputs/debd042c67072a597485816d9fbd72df.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/dedd220480dc7737fba70eaad16a559a.txt b/scrapped_outputs/dedd220480dc7737fba70eaad16a559a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/dedf06950837edf44bd479a9737f2df0.txt b/scrapped_outputs/dedf06950837edf44bd479a9737f2df0.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbba08e6089c48721c4daf719b002f35502d6466 --- /dev/null +++ b/scrapped_outputs/dedf06950837edf44bd479a9737f2df0.txt @@ -0,0 +1,573 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared the previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. 
Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends for ODE/SDE solvers:set use_karras_sigmas=True or lu_lambdas=True to improve image quality set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE) Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. We can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
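Following the scheduler tip above, a minimal sketch of switching to DPM++ 2M with Karras sigmas might look like this (the use_karras_sigmas and euler_at_final options are assumed to be available in your installed version of diffusers): Copied
# Sketch: pairing SDXL with DPM++ 2M and Karras sigmas, per the Tips above.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True, euler_at_final=True
)
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=30).images[0]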
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. 
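The dual-prompt and micro-conditioning arguments documented above can be combined in a single call; a hedged sketch with illustrative values: Copied
# Sketch: combining prompt_2 with SDXL's size/crop micro-conditioning arguments.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    prompt="a photo of an astronaut riding a horse on mars",   # sent to the first text encoder
    prompt_2="cinematic, dramatic lighting, highly detailed",  # sent to the second text encoder
    original_size=(1024, 1024),
    target_size=(1024, 1024),
    crops_coords_top_left=(0, 0),
    negative_original_size=(512, 512),  # steer away from low-resolution training buckets
    negative_target_size=(1024, 1024),
    guidance_scale=5.0,
    num_inference_steps=50,
).images[0]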
+ Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. 
Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
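The denoising_start and denoising_end descriptions above are easier to follow with a concrete example. The lines below are a minimal, hedged sketch of the "Mixture of Denoisers" setup they refer to, assuming the public SDXL base and refiner checkpoints; the specific model IDs and the 0.8 split are illustrative choices, not requirements.

import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# The base pipeline stops early (denoising_end=0.8) and returns latents that still contain noise.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# The refiner picks up where the base stopped (denoising_start=0.8); `strength` is ignored here.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]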
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image.
If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. 
text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-guided image inpainting using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2.
If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop to be applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains all of the masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant to inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored.
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator.
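As a quick illustration of the padding_mask_crop parameter described above, the hedged sketch below crops the image and mask pair around the masked region (a 32-pixel margin is an arbitrary choice) before inpainting; the image and mask URLs are the same ones used in the full example further below.

import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

# Only the masked region (plus a 32 px margin) is cropped out, inpainted, and pasted back,
# which helps when the mask is small relative to a large image.
image = pipe(
    prompt="A majestic tiger sitting on a bench",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,
    strength=0.85,
).images[0]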
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps.
This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
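Since fuse_qkv_projections() and unfuse_qkv_projections() are only described in prose above, here is a brief, hedged sketch of how they are typically toggled around inference; the checkpoint name is just an example and the API is experimental, so behavior may change.

import torch
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Fuse the QKV projections in the UNet and VAE before running inference ...
pipe.fuse_qkv_projections()
# ... call the pipeline as usual here ...
# ... and undo the fusion afterwards if the original, unfused projections are needed again.
pipe.unfuse_qkv_projections()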
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/dee9e039d73e6107aad9f1cc41342b43.txt b/scrapped_outputs/dee9e039d73e6107aad9f1cc41342b43.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/df149220778a07b1fbb65a5948657eb3.txt b/scrapped_outputs/df149220778a07b1fbb65a5948657eb3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/df35bf7557df60475e706f4b3e0d65fe.txt b/scrapped_outputs/df35bf7557df60475e706f4b3e0d65fe.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/df35bf7557df60475e706f4b3e0d65fe.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. 
Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). 
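To make set_sigmas/set_timesteps together with step_correct and step_pred (the latter two documented just below) more concrete, here is a rough, hedged sketch of the predictor-corrector sampling loop they are designed for. It assumes an unconditional UNet2DModel score model; the checkpoint name and subfolder layout are assumptions, and the actual ScoreSdeVePipeline may differ in details.

import torch
from diffusers import ScoreSdeVeScheduler, UNet2DModel

# Example checkpoint, assumed to contain "unet" and "scheduler" subfolders.
model = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", subfolder="unet").to("cuda")
scheduler = ScoreSdeVeScheduler.from_pretrained("google/ncsnpp-celebahq-256", subfolder="scheduler")

num_inference_steps = 2000
scheduler.set_timesteps(num_inference_steps)
scheduler.set_sigmas(num_inference_steps)

# Start from pure noise scaled by the maximum sigma of the variance-exploding SDE.
sample = torch.randn(1, 3, 256, 256, device="cuda") * scheduler.config.sigma_max

for i, t in enumerate(scheduler.timesteps):
    sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0], device=sample.device)

    # Corrector: refine the current sample `correct_steps` times using the score estimate.
    for _ in range(scheduler.config.correct_steps):
        score = model(sample, sigma_t).sample
        sample = scheduler.step_correct(score, sample).prev_sample

    # Predictor: reverse one step of the SDE; the denoised output is usually read from prev_sample_mean.
    score = model(sample, sigma_t).sample
    output = scheduler.step_pred(score, t, sample)
    sample, sample_mean = output.prev_sample, output.prev_sample_mean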
step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/df36e16e530aa84f0feb10195b19ab69.txt b/scrapped_outputs/df36e16e530aa84f0feb10195b19ab69.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/df36e16e530aa84f0feb10195b19ab69.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. 
To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple.
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... ) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/df43c182d35ccae52fc27a9e256c792a.txt b/scrapped_outputs/df43c182d35ccae52fc27a9e256c792a.txt new file mode 100644 index 0000000000000000000000000000000000000000..619b44cd8c05a0c372dc935e8e8f3871d9c7d942 --- /dev/null +++ b/scrapped_outputs/df43c182d35ccae52fc27a9e256c792a.txt @@ -0,0 +1,3 @@ +TODO + +Coming soon! diff --git a/scrapped_outputs/df532cd65ddc2f078fbbcb79271a5337.txt b/scrapped_outputs/df532cd65ddc2f078fbbcb79271a5337.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf2d88cd2c276c34e8e6673ea524a7e773e96a51 --- /dev/null +++ b/scrapped_outputs/df532cd65ddc2f078fbbcb79271a5337.txt @@ -0,0 +1,32 @@ +Text-Guided Image-to-Image Generation + +The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. + + + Copied +import torch +import requests +from PIL import Image +from io import BytesIO + +from diffusers import StableDiffusionImg2ImgPipeline + +# load the pipeline +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + device +) + +# let's download an initial image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image.thumbnail((768, 768)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +images[0].save("fantasy_landscape.png") +You can also run this example on colab diff --git a/scrapped_outputs/df536797447761f70accacfca3faf213.txt b/scrapped_outputs/df536797447761f70accacfca3faf213.txt new file mode 100644 index 0000000000000000000000000000000000000000..1734d899388d6f30b496363114538c7da7d1ab97 --- /dev/null +++ b/scrapped_outputs/df536797447761f70accacfca3faf213.txt @@ -0,0 +1,226 @@ +aMUSEd Amused is a lightweight text to image model based off of the muse architecture. 
Amused is particularly useful in applications that require a lightweight and fast model, such as generating many images quickly at once. Amused is a VQ-VAE token-based transformer that can generate an image in fewer forward passes than many diffusion models. In contrast with muse, it uses the smaller text encoder CLIP-L/14 instead of t5-xxl. Due to its small parameter count and few-forward-pass generation process, Amused can generate many images quickly; this benefit is especially noticeable at larger batch sizes. Model Params amused-256 603M amused-512 608M AmusedPipeline class diffusers.AmusedPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.transformer.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.IntTensor, optional) — +Pre-generated tokens representing latent vectors in self.vqvae, to be used as inputs for image +generation. If not provided, the starting latents will be completely masked. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedPipeline + +>>> pipe = AmusedPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
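Because aMUSEd's speed advantage is most visible at larger batch sizes, the sketch below batches several prompts and several images per prompt into a single AmusedPipeline call. It is only a minimal illustration: the checkpoint id follows the naming of the amused-256 entry in the table above, and the prompts, negative prompt, and file names are placeholders.
Copied
import torch
from diffusers import AmusedPipeline

# "amused/amused-256" is assumed to follow the same naming as the "amused/amused-512"
# checkpoint used in the example above; swap in whichever checkpoint you prefer.
pipe = AmusedPipeline.from_pretrained("amused/amused-256", variant="fp16", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompts = [
    "a watercolor painting of a lighthouse at dusk",
    "a macro photo of a dew-covered leaf",
]

# Two prompts x four images per prompt = eight images from a single call.
images = pipe(
    prompt=prompts,
    negative_prompt=["blurry, low quality"] * len(prompts),
    num_images_per_prompt=4,
    num_inference_steps=12,
    guidance_scale=10.0,
).images

for i, image in enumerate(images):
    image.save(f"amused_batch_{i}.png")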
class diffusers.AmusedImg2ImgPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.5 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list +of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image +latents as image, but if passing latents directly it is not encoded again. strength (float, optional, defaults to 0.5) — +Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedImg2ImgPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "winter mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> image = pipe(prompt, input_image).images[0] enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. class diffusers.AmusedInpaintPipeline < source > ( vqvae: VQModel tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection transformer: UVit2DModel scheduler: AmusedScheduler ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 12 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 generator: Optional = None prompt_embeds: Optional = None encoder_hidden_states: Optional = None negative_prompt_embeds: Optional = None negative_encoder_hidden_states: Optional = None output_type = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None micro_conditioning_aesthetic_score: int = 6 micro_conditioning_crop_coord: Tuple = (0, 0) temperature: Union = (2, 0) ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to be used as the starting point. For both +numpy array and pytorch tensor, the expected value range is between [0, 1] If it’s a tensor or a list +or tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a +list of arrays, the expected shape should be (B, H, W, C) or (H, W, C) It can also accept image +latents as image, but if passing latents directly it is not encoded again. mask_image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, numpy array or tensor representing an image batch to mask image. White pixels in the mask +are repainted while black pixels are preserved. If mask_image is a PIL image, it is converted to a +single channel (luminance) before use. If it’s a numpy array or pytorch tensor, it should contain one +color channel (L) instead of 3, so the expected shape for pytorch tensor would be (B, 1, H, W), (B, H, W), (1, H, W), (H, W). And for numpy array would be for (B, H, W, 1), (B, H, W), (H, W, 1), or (H, W). strength (float, optional, defaults to 1.0) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 16) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. A single vector from the +pooled and projected final hidden states. encoder_hidden_states (torch.FloatTensor, optional) — +Pre-generated penultimate hidden states from the text encoder providing additional text conditioning. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. negative_encoder_hidden_states (torch.FloatTensor, optional) — +Analogous to encoder_hidden_states for the positive prompt. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. micro_conditioning_aesthetic_score (int, optional, defaults to 6) — +The targeted aesthetic score according to the laion aesthetic classifier. See https://laion.ai/blog/laion-aesthetics/ +and the micro-conditioning section of https://arxiv.org/abs/2307.01952. micro_conditioning_crop_coord (Tuple[int], optional, defaults to (0, 0)) — +The targeted height, width crop coordinates. See the micro-conditioning section of https://arxiv.org/abs/2307.01952. temperature (Union[int, Tuple[int, int], List[int]], optional, defaults to (2, 0)) — +Configures the temperature scheduler on self.scheduler see AmusedScheduler#set_timesteps. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import AmusedInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = AmusedInpaintPipeline.from_pretrained( +... "amused/amused-512", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "fall mountains" +>>> input_image = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1.jpg" +... ) +... .resize((512, 512)) +... .convert("RGB") +... ) +>>> mask = ( +... load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/open_muse/mountains_1_mask.png" +... ) +... .resize((512, 512)) +... .convert("L") +... ) +>>> pipe(prompt, input_image, mask).images[0].save("out.png") enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. diff --git a/scrapped_outputs/df7b6ddb8730c8fa3ccf9f194e3cd7ed.txt b/scrapped_outputs/df7b6ddb8730c8fa3ccf9f194e3cd7ed.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/df7b6ddb8730c8fa3ccf9f194e3cd7ed.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. 
Tips Stable unCLIP takes noise_level as input during inference, which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. Text-guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
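To make the noise_level tip above concrete, the sketch below reuses the image-variation setup and sweeps a few noise_level values for the same input image and seed, so higher values produce progressively more varied outputs. The specific values and output file names are arbitrary choices for illustration.
Copied
import torch
from diffusers import StableUnCLIPImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
init_image = load_image(url)

# Fix the seed so that only noise_level differs between the outputs.
for noise_level in (0, 250, 500):
    generator = torch.Generator(device="cuda").manual_seed(0)
    image = pipe(init_image, noise_level=noise_level, generator=generator).images[0]
    image.save(f"variation_noise_{noise_level}.png")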
StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. prior (PriorTransformer) — +The canonical unCLIP prior used to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. 
image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... 
) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
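As a closing illustration of the two-step noising that noise_image_embeddings() describes above, the sketch below applies a DDPM-style schedule to a stand-in batch of CLIP image embeddings and then appends a sinusoidal embedding of the noise level. It is a simplified approximation built from generic components (a plain mean/std normalization instead of the pipeline's StableUnCLIPImageNormalizer), not the pipeline's exact implementation.
Copied
import torch
from diffusers import DDPMScheduler
from diffusers.models.embeddings import get_timestep_embedding

# Stand-in for a batch of CLIP image embeddings (batch_size=2, embedding dim 768).
image_embeds = torch.randn(2, 768)
noise_level = 250  # higher values add more noise, hence more variation downstream

scheduler = DDPMScheduler(num_train_timesteps=1000)  # plays the role of image_noising_scheduler
timesteps = torch.tensor([noise_level] * image_embeds.shape[0])

# 1) Normalize, noise the embeddings directly with the schedule, then un-normalize.
mean, std = image_embeds.mean(), image_embeds.std()  # crude stand-in for the image normalizer
normed = (image_embeds - mean) / std
noised = scheduler.add_noise(normed, torch.randn_like(normed), timesteps)
noised = noised * std + mean

# 2) Append a sinusoidal embedding of the noise level itself.
level_emb = get_timestep_embedding(
    timesteps, embedding_dim=image_embeds.shape[-1], flip_sin_to_cos=True, downscale_freq_shift=0
)
conditioned = torch.cat([noised, level_emb.to(noised.dtype)], dim=1)
print(conditioned.shape)  # torch.Size([2, 1536]); this combined tensor conditions the UNet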
diff --git a/scrapped_outputs/df86b3a74d8d2675cd0efcacf43b8505.txt b/scrapped_outputs/df86b3a74d8d2675cd0efcacf43b8505.txt new file mode 100644 index 0000000000000000000000000000000000000000..003f32be8c473f8f647bdb5cf370dbd7c1372127 --- /dev/null +++ b/scrapped_outputs/df86b3a74d8d2675cd0efcacf43b8505.txt @@ -0,0 +1,23 @@ +Text-Guided Image-to-Image Generation + +The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the images’ structure. If no depth_map is provided, the pipeline will automatically predict the depth via an integrated depth-estimation model. + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +n_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] diff --git a/scrapped_outputs/df8aa33b2657dfc37dcacf1eaa956dbe.txt b/scrapped_outputs/df8aa33b2657dfc37dcacf1eaa956dbe.txt new file mode 100644 index 0000000000000000000000000000000000000000..45e22755718c396e45e6a4cc8269e866cbee209f --- /dev/null +++ b/scrapped_outputs/df8aa33b2657dfc37dcacf1eaa956dbe.txt @@ -0,0 +1,175 @@ +ControlNet-XS with Stable Diffusion XL ControlNet-XS was introduced in ControlNet-XS by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the original ControlNet can be made much smaller and still produce good results. Like the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that’ll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process. ControlNet-XS generates images with comparable quality to a regular ControlNet, but it is 20-25% faster (see benchmark) and uses ~45% less memory. Here’s the overview from the project page: With increasing computing capabilities, current model architectures appear to follow the trend of simply upscaling all components without validating the necessity for doing so. In this project we investigate the size and architectural design of ControlNet [Zhang et al., 2023] for controlling the image generation process with stable diffusion-based models. We show that a new architecture with as little as 1% of the parameters of the base model achieves state-of-the art results, considerably better than ControlNet in terms of FID score. Hence we call it ControlNet-XS. We provide the code for controlling StableDiffusion-XL [Podell et al., 2023] (Model B, 48M Parameters) and StableDiffusion 2.1 [Rombach et al. 2022] (Model B, 14M Parameters), all under openrail license. This model was contributed by UmerHA. ❤️ 🧪 Many of the SDXL ControlNet checkpoints are experimental, and there is a lot of room for improvement. Feel free to open an Issue and leave us feedback on how we can improve! 
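Since StableDiffusionXLControlNetXSPipeline (documented below) is assembled from an SDXL base model plus a small ControlNet-XS control model, a minimal usage sketch could look like the following. The ControlNet-XS checkpoint id and the conditioning-image path are placeholders, and the sketch assumes ControlNetXSModel can be loaded with from_pretrained like other diffusers models; substitute a real canny or depth ControlNet-XS checkpoint and your own control image.
Copied
import torch
from diffusers import StableDiffusionXLControlNetXSPipeline, ControlNetXSModel
from diffusers.utils import load_image

# Placeholder id for an SDXL ControlNet-XS control model trained on canny edges.
controlnet = ControlNetXSModel.from_pretrained(
    "path/to/controlnet-xs-sdxl-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The control image (here, a canny edge map) steers the spatial layout of the output.
canny_image = load_image("path/or/url/to/canny_edge_map.png")  # replace with your own edge map

prompt = "aerial view of a futuristic city at sunset, highly detailed"
image = pipe(
    prompt,
    image=canny_image,
    controlnet_conditioning_scale=0.5,
    num_inference_steps=50,
).images[0]
image.save("controlnet_xs_sdxl.png")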
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionXLControlNetXSPipeline class diffusers.StableDiffusionXLControlNetXSPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel controlnet: ControlNetXSModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). text_encoder_2 (CLIPTextModelWithProjection) — +Second frozen text-encoder +(laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. tokenizer_2 (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. controlnet (ControlNetXSModel — +Provides additional conditioning to the unet during the denoising process. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings should always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it defaults to True if the package is installed; otherwise no +watermarker is used. Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet-XS guidance. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None controlnet_conditioning_scale: Union = 1.0 control_guidance_start: float = 0.0 control_guidance_end: float = 1.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], List[np.ndarray], — +List[List[torch.FloatTensor]], List[List[np.ndarray]] or List[List[PIL.Image.Image]]): +The ControlNet input condition to provide guidance to the unet for generation. If the type is +specified as torch.FloatTensor, it is passed to ControlNet as is. PIL.Image.Image can also be +accepted as an image. The dimensions of the output image defaults to image’s dimensions. If height +and/or width are passed, image is resized accordingly. If multiple ControlNets are specified in +init, images must be passed as a list such that each element of the list can be correctly batched for +input to a single ControlNet. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. 
If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. This is sent to tokenizer_2 +and text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, pooled text embeddings are generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt +weighting). If not provided, pooled negative_prompt_embeds are generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. controlnet_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added +to the residual in the original unet. control_guidance_start (float, optional, defaults to 0.0) — +The percentage of total steps at which the ControlNet starts applying. control_guidance_end (float, optional, defaults to 1.0) — +The percentage of total steps at which the ControlNet stops applying. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +If return_dict is True, ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput is +returned, otherwise a tuple is returned containing the output images. + The call function to the pipeline for generation. Examples: Copied >>> # !pip install opencv-python transformers accelerate +>>> from diffusers import StableDiffusionXLControlNetXSPipeline, ControlNetXSModel, AutoencoderKL +>>> from diffusers.utils import load_image +>>> import numpy as np +>>> import torch + +>>> import cv2 +>>> from PIL import Image + +>>> prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +>>> negative_prompt = "low quality, bad quality, sketches" + +>>> # download an image +>>> image = load_image( +... "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +... 
) + +>>> # initialize the models and pipeline +>>> controlnet_conditioning_scale = 0.5 # recommended for good generalization +>>> controlnet = ControlNetXSModel.from_pretrained("UmerHA/ConrolNetXS-SDXL-canny", torch_dtype=torch.float16) +>>> vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) +>>> pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> # get canny image +>>> image = np.array(image) +>>> image = cv2.Canny(image, 100, 200) +>>> image = image[:, :, None] +>>> image = np.concatenate([image, image, image], axis=2) +>>> canny_image = Image.fromarray(image) + +>>> # generate image +>>> image = pipe( +... prompt, controlnet_conditioning_scale=controlnet_conditioning_scale, image=canny_image +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/dfb0ef9fe8a090b0450d4d73a0707e4d.txt b/scrapped_outputs/dfb0ef9fe8a090b0450d4d73a0707e4d.txt new file mode 100644 index 0000000000000000000000000000000000000000..22f4364ce97372704c79647dbbdce1598a164ef3 --- /dev/null +++ b/scrapped_outputs/dfb0ef9fe8a090b0450d4d73a0707e4d.txt @@ -0,0 +1,253 @@ +PaintByExample + + +Overview + +Paint by Example: Exemplar-based Image Editing with Diffusion Models by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen +The abstract of the paper is the following: +Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. 
We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_paint_by_example.py +Image-Guided Image Painting +- + +Tips + +PaintByExample is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. The checkpoint has been warm-started from the CompVis/stable-diffusion-v1-4 and with the objective to inpaint partly masked images conditioned on example / reference images +To quickly demo PaintByExample, please have a look at this demo +You can run the following code snippet as an example: + + + Copied +# !pip install diffusers transformers + +import PIL +import requests +import torch +from io import BytesIO +from diffusers import DiffusionPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) +example_image = download_image(example_url).resize((512, 512)) + +pipe = DiffusionPipeline.from_pretrained( + "Fantasy-Studio/Paint-by-Example", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +image + +PaintByExamplePipeline + + +class diffusers.PaintByExamplePipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: PaintByExampleImageEncoder +unet: UNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = False + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +example_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 5.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +The exemplar image to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. + + +mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). 
+ + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/dfd03e9c57ecdfdbf86d684111563013.txt b/scrapped_outputs/dfd03e9c57ecdfdbf86d684111563013.txt new file mode 100644 index 0000000000000000000000000000000000000000..b84e158cee82f4f557c3e68680d974534614ca5a --- /dev/null +++ b/scrapped_outputs/dfd03e9c57ecdfdbf86d684111563013.txt @@ -0,0 +1,699 @@ +Loaders + +There are many ways to train adapter neural networks for diffusion models, such as +Textual Inversion +LoRA +Hypernetworks +Such adapter neural networks often only consist of a fraction of the number of weights compared +to the pretrained model and as such are very portable. The Diffusers library offers an easy-to-use +API to load such adapter neural networks via the loaders.py module. +Note: This module is still highly experimental and prone to future changes. 
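As a quick illustration of the API documented on the rest of this page, a typical workflow loads LoRA attention-processor weights into a pipeline's UNet (or into both the UNet and text encoder via load_lora_weights). The checkpoint path below is a placeholder; substitute a Hub repository id or a local directory produced with save_attn_procs / save_lora_weights. Copied
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA attention-processor weights into the UNet.
# "path/to/lora" is a placeholder for a Hub repo id or a local directory.
pipe.unet.load_attn_procs("path/to/lora")

# Alternatively, load LoRA layers for both the UNet and the text encoder:
# pipe.load_lora_weights("path/to/lora")

image = pipe("a photo of a corgi wearing sunglasses", num_inference_steps=30).images[0]
image.save("corgi.png")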
+ +LoaderMixins + + +UNet2DConditionLoadersMixin + + +class diffusers.loaders.UNet2DConditionLoadersMixin + +< +source +> +( +) + + + + +load_attn_procs + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers into UNet2DConditionModel. Attention processor layers have to be +defined in +cross_attention.py +and be a torch.nn.Module class. +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_attn_procs + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False +**kwargs + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. 
In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save an attention processor to a directory, so that it can be re-loaded using the +load_attn_procs() method. + +TextualInversionLoaderMixin + + +class diffusers.loaders.TextualInversionLoaderMixin + +< +source +> +( +) + + + +Mixin class for loading textual inversion tokens and embeddings to the tokenizer and text encoder. + +load_textual_inversion + +< +source +> +( +pretrained_model_name_or_path: typing.Union[str, typing.Dict[str, torch.Tensor]] +token: typing.Optional[str] = None +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like +"sd-concepts-library/low-poly-hd-logos-icons". +A path to a directory containing textual inversion weights, e.g. +./my_text_inversion_directory/. + + + +weight_name (str, optional) — +Name of a custom weight file. This should be used in two cases: + +The saved textual inversion file is in diffusers format, but was saved under a specific weight +name, such as text_inv.bin. +The saved textual inversion file is in the “Automatic1111” form. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load textual inversion embeddings into the text encoder of stable diffusion pipelines. 
Both diffusers and +Automatic1111 formats are supported (see example below). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. +Example: + +To load a textual inversion embedding vector in diffusers format: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") +To load a textual inversion embedding vector in Automatic1111 format, make sure to first download the vector, + +e.g. from civitAI and then load the vector locally: + + + Copied +from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") + +maybe_convert_prompt + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +tokenizer: PreTrainedTokenizer + +) +→ +str or list of str + +Parameters + +prompt (str or list of str) — +The prompt or prompts to guide the image generation. + + +tokenizer (PreTrainedTokenizer) — +The tokenizer responsible for encoding the prompt into input tokens. + + +Returns + +str or list of str + + + +The converted prompt + + +Maybe convert a prompt into a “multi vector”-compatible prompt. If the prompt includes a token that corresponds +to a multi-vector textual inversion embedding, this function will process the prompt so that the special token +is replaced with multiple special tokens each corresponding to one of the vectors. If the prompt has no textual +inversion token or a textual inversion token that is a single vector, the input prompt is simply returned. + +LoraLoaderMixin + + +class diffusers.loaders.LoraLoaderMixin + +< +source +> +( +) + + + +Utility class for handling the loading LoRA layers into UNet (of class UNet2DConditionModel) and Text Encoder +(of class CLIPTextModel). +This function is experimental and might change in the future. + +load_attn_procs + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) +→ +Dict[name, LoRAAttnProcessor] + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. 
+ + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + +Returns + +Dict[name, LoRAAttnProcessor] + + + +Mapping between the module names and their corresponding +LoRAAttnProcessor. + + +Load pretrained attention processor layers for +CLIPTextModel. +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +load_lora_weights + +< +source +> +( +pretrained_model_name_or_path_or_dict: typing.Union[str, typing.Dict[str, torch.Tensor]] +**kwargs + +) + + +Parameters + +pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. +Valid model ids should have an organization name, like google/ddpm-celebahq-256. +A path to a directory containing model weights saved using ~ModelMixin.save_config, e.g., +./my_model_directory/. +A torch state +dict. + + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
+ + +local_files_only(bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running diffusers-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +subfolder (str, optional, defaults to "") — +In case the relevant files are located inside a subfolder of the model repo (either remote in +huggingface.co or downloaded locally), you can specify the folder name here. + + +mirror (str, optional) — +Mirror source to accelerate downloads in China. If you are from China and have an accessibility +problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. +Please refer to the mirror site for more information. + + + +Load pretrained attention processor layers (such as LoRA) into UNet2DConditionModel and +CLIPTextModel). +This function is experimental and might change in the future. +It is required to be logged in (huggingface-cli login) when you want to use private or gated +models. + +save_lora_weights + +< +source +> +( +save_directory: typing.Union[str, os.PathLike] +unet_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +text_encoder_lora_layers: typing.Dict[str, torch.nn.modules.module.Module] = None +is_main_process: bool = True +weight_name: str = None +save_function: typing.Callable = None +safe_serialization: bool = False + +) + + +Parameters + +save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. + + +unet_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the UNet. Specifying this helps to make the +serialization process easier and cleaner. + + +text_encoder_lora_layers (Dict[str, torch.nn.Module]) — +State dict of the LoRA layers corresponding to the text_encoder. Since the text_encoder comes from +transformers, we cannot rejig it. That is why we have to explicitly pass the text encoder LoRA state +dict. + + +is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful when in distributed training like +TPUs and need to call this function on all processes. In this case, set is_main_process=True only on +the main process to avoid race conditions. + + +save_function (Callable) — +The function to use to save the state dictionary. Useful on distributed training like TPUs when one +need to replace torch.save by another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. + + + +Save the LoRA parameters corresponding to the UNet and the text encoder. + +FromCkptMixin + + +class diffusers.loaders.FromCkptMixin + +< +source +> +( +) + + + +This helper class allows to directly load .ckpt stable diffusion file_extension +into the respective classes. + +from_ckpt + +< +source +> +( +pretrained_model_link_or_path +**kwargs + +) + + +Parameters + +pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file on the Hub. Should be in the format +"https://huggingface.co//blob/main/" +A path to a file containing all pipeline weights. 
+ + + +torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model under this dtype. If "auto" is passed the dtype +will be automatically derived from the model’s weights. + + +force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. + + +cache_dir (Union[str, os.PathLike], optional) — +Path to a directory in which a downloaded pretrained model configuration should be cached if the +standard cache should not be used. + + +resume_download (bool, optional, defaults to False) — +Whether or not to delete incompletely received files. Will attempt to resume the download if such a +file exists. + + +proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. + + +local_files_only (bool, optional, defaults to False) — +Whether or not to only look at local files (i.e., do not try to download the model). + + +use_auth_token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, will use the token generated +when running huggingface-cli login (stored in ~/.huggingface). + + +revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a +git-based system for storing models and other artifacts on huggingface.co, so revision can be any +identifier allowed by git. + + +use_safetensors (bool, optional ) — +If set to True, the pipeline will be loaded from safetensors weights. If set to None (the +default). The pipeline will load using safetensors if the safetensors weights are available and if +safetensors is installed. If the to False the pipeline will not use safetensors. + + +extract_ema (bool, optional, defaults to False) — Only relevant for +checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights or not. Defaults +to False. Pass True to extract the EMA weights. EMA weights usually yield higher quality images for +inference. Non-EMA weights are usually better to continue fine-tuning. + + +upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. This is necessary when running stable + + +image_size (int, optional, defaults to 512) — +The image size that the model was trained on. Use 512 for Stable Diffusion v1.X and Stable Diffusion v2 +Base. Use 768 for Stable Diffusion v2. + + +prediction_type (str, optional) — +The prediction type that the model was trained on. Use 'epsilon' for Stable Diffusion v1.X and Stable +Diffusion v2 Base. Use 'v_prediction' for Stable Diffusion v2. + + +num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it will be automatically inferred. + + +scheduler_type (str, optional, defaults to ‘pndm’) — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. + + +load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. Defaults to True. + + +kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load - and saveable variables - i.e. the pipeline components - of the +specific pipeline class. The overwritten components are then directly passed to the pipelines +__init__ method. 
See example below for more information. + + + +Instantiate a PyTorch diffusion pipeline from pre-trained pipeline weights saved in the original .ckpt format. +The pipeline is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). + +Examples: + + + Copied +>>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_ckpt( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_ckpt("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_ckpt( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") diff --git a/scrapped_outputs/dfd547f4c0b421b54f44c1267bdf416b.txt b/scrapped_outputs/dfd547f4c0b421b54f44c1267bdf416b.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/dfd547f4c0b421b54f44c1267bdf416b.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. diff --git a/scrapped_outputs/dfd5f4748016a21452b35369f9c6bc35.txt b/scrapped_outputs/dfd5f4748016a21452b35369f9c6bc35.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/dfe070aa7f81a5421acd0b7133f0b7e1.txt b/scrapped_outputs/dfe070aa7f81a5421acd0b7133f0b7e1.txt new file mode 100644 index 0000000000000000000000000000000000000000..9ca029bb91c67a0ec67932bad4223385e8d4a365 --- /dev/null +++ b/scrapped_outputs/dfe070aa7f81a5421acd0b7133f0b7e1.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. 
in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the ~trl.DDPOTrainer. For more information, check out the ~trl.DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/dfe6cc9556fb0edf37e67a4260cd32bb.txt b/scrapped_outputs/dfe6cc9556fb0edf37e67a4260cd32bb.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bc887713a4db23ab02dc3377a161ea6292c27f --- /dev/null +++ b/scrapped_outputs/dfe6cc9556fb0edf37e67a4260cd32bb.txt @@ -0,0 +1,23 @@ +Controlled generation Controlling outputs generated by diffusion models has been long pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes in inputs, both images and text prompts, can drastically change outputs. In an ideal world we want to be able to control how semantics are preserved and changed. Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. I.e. adding an adjective to a subject in a prompt preserves the entire image, only modifying the changed subject. Or, image variation of a particular subject preserves the subject’s pose. Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. I.e. in general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic. We will document some of the techniques diffusers supports to control generation of diffusion models. Much is cutting edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don’t hesitate to open a discussion on the forum or a GitHub issue. We provide a high level explanation of how the generation can be controlled as well as a snippet of the technicals. For more in depth explanations on the technicals, the original papers which are linked from the pipelines are always the best resources. Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion. Unless otherwise mentioned, these are techniques that work with existing models and don’t require their own weights. InstructPix2Pix Pix2Pix Zero Attend and Excite Semantic Guidance Self-attention Guidance Depth2Image MultiDiffusion Panorama DreamBooth Textual Inversion ControlNet Prompt Weighting Custom Diffusion Model Editing DiffEdit T2I-Adapter FABRIC For convenience, we provide a table to denote which methods are inference-only and which require fine-tuning/training. Method Inference only Requires training / fine-tuning Comments InstructPix2Pix ✅ ❌ Can additionally befine-tuned for better performance on specific edit instructions. Pix2Pix Zero ✅ ❌ Attend and Excite ✅ ❌ Semantic Guidance ✅ ❌ Self-attention Guidance ✅ ❌ Depth2Image ✅ ❌ MultiDiffusion Panorama ✅ ❌ DreamBooth ❌ ✅ Textual Inversion ❌ ✅ ControlNet ✅ ❌ A ControlNet can be trained/fine-tuned ona custom conditioning. Prompt Weighting ✅ ❌ Custom Diffusion ❌ ✅ Model Editing ✅ ❌ DiffEdit ✅ ❌ T2I-Adapter ✅ ❌ Fabric ✅ ❌ InstructPix2Pix Paper InstructPix2Pix is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image. +InstructPix2Pix has been explicitly trained to work well with InstructGPT-like prompts. 
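A minimal InstructPix2Pix edit with the publicly available timbrooks/instruct-pix2pix checkpoint looks roughly like the following; the input image path is a placeholder, and any RGB image works. Note that the prompt is an edit instruction rather than a description of the final image. Copied
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Any RGB image works here; the path below is a placeholder.
image = load_image("path/or/url/to/your_image.png").resize((512, 512))

edited = pipe(
    "turn the sky into a sunset",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # controls how closely the output follows the input image
).images[0]
edited.save("edited.png")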
Pix2Pix Zero Paper Pix2Pix Zero allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics. The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation. Pix2Pix Zero can be used both to edit synthetic images as well as real images. To edit synthetic images, one first generates an image given a caption. +Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like Flan-T5 for this purpose. Then, “mean” prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image. To edit a real image, one first generates an image caption using a model like BLIP. Then one applies DDIM inversion on the prompt and image to generate “inverse” latents. Similar to before, “mean” prompt embeddings for both source and target concepts are created and finally the pix2pix-zero algorithm in combination with the “inverse” latents is used to edit the image. Pix2Pix Zero is the first model that allows “zero-shot” image editing. This means that the model +can edit an image in less than a minute on a consumer GPU as shown here. As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall +pipeline might require more memory than a standard StableDiffusionPipeline. An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former +involves fine-tuning the pre-trained weights while the latter does not. This means that you can +apply Pix2Pix Zero to any of the available Stable Diffusion models. Attend and Excite Paper Attend and Excite allows subjects in the prompt to be faithfully represented in the final image. A set of token indices are given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens. Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual StableDiffusionPipeline. Semantic Guidance (SEGA) Paper SEGA allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. I.e. the smile concept can be used to incrementally increase or decrease the smile of a portrait. Similar to how classifier free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove their concept depending on if the guidance is applied positively or negatively. 
Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization. Self-attention Guidance (SAG) Paper Self-attention Guidance improves the general quality of images. SAG provides guidance from predictions not conditioned on high-frequency details to fully conditioned images. The high frequency details are extracted out of the UNet self-attention maps. Depth2Image Project Depth2Image is fine-tuned from Stable Diffusion to better preserve semantics for text guided image variation. It conditions on a monocular depth estimate of the original image. MultiDiffusion Panorama Paper MultiDiffusion Panorama defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. +MultiDiffusion Panorama allows to generate high-quality images at arbitrary aspect ratios (e.g., panoramas). Fine-tuning your own models In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data. DreamBooth Project DreamBooth fine-tunes a model to teach it about a new subject. I.e. a few pictures of a person can be used to generate images of that person in different styles. Textual Inversion Paper Textual Inversion fine-tunes a model to teach it about a new concept. I.e. a few pictures of a style of artwork can be used to generate images in that style. ControlNet Paper ControlNet is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles, +depth maps, and semantic segmentations. Prompt Weighting Prompt weighting is a simple technique that puts more attention weight on certain parts of the text +input. Custom Diffusion Paper Custom Diffusion only fine-tunes the cross-attention maps of a pre-trained +text-to-image diffusion model. It also allows for additionally performing Textual Inversion. It supports +multi-concept training by design. Like DreamBooth and Textual Inversion, Custom Diffusion is also used to +teach a pre-trained text-to-image diffusion model about new concepts to generate outputs involving the +concept(s) of interest. Model Editing Paper The text-to-image model editing pipeline helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image +diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for “A pack of roses”, the roses in the generated images +are more likely to be red. This pipeline helps you change that assumption. DiffEdit Paper DiffEdit allows for semantic editing of input images along with +input prompts while preserving the original input images as much as possible. T2I-Adapter Paper T2I-Adapter is an auxiliary network which adds an extra condition. +There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch, +depth maps, and semantic segmentations. 
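To make the auxiliary-conditioning idea above more concrete, here is a minimal sketch of plugging a T2I-Adapter into a Stable Diffusion pipeline; the adapter and base-model checkpoint names are assumptions, and the edge map is a placeholder you would prepare yourself (for example with a Canny detector):
Copied
>>> import torch
>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
>>> from diffusers.utils import load_image

>>> # Adapter trained on canny-edge conditioning (checkpoint name assumed for illustration).
>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16)
>>> pipe = StableDiffusionAdapterPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
... ).to("cuda")

>>> # The adapter consumes the control image while the base UNet weights stay frozen.
>>> edges = load_image("canny_edges.png")  # placeholder: a precomputed edge map
>>> image = pipe("a futuristic city at night", image=edges).images[0]
>>> image.save("t2i_adapter_out.png")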
Fabric Paper Fabric is a training-free +approach applicable to a wide range of popular diffusion models, which exploits +the self-attention layer present in the most widely used architectures to condition +the diffusion process on a set of feedback images. diff --git a/scrapped_outputs/dffc63567f029685d86e2bfb207e8233.txt b/scrapped_outputs/dffc63567f029685d86e2bfb207e8233.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/dffc63567f029685d86e2bfb207e8233.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. 
| GPU | Resolution | Batch size | Vanilla | ToMe | ToMe + xFormers |
|---|---|---|---|---|---|
| A100 | 512 | 10 | 6.88 | 5.26 (+23.55%) | 4.69 (+31.83%) |
| A100 | 768 | 10 | OOM | 14.71 | 11 |
| A100 | 768 | 8 | OOM | 11.56 | 8.84 |
| A100 | 768 | 4 | OOM | 5.98 | 4.66 |
| A100 | 768 | 2 | 4.99 | 3.24 (+35.07%) | 2.1 (+37.88%) |
| A100 | 768 | 1 | 3.29 | 2.24 (+31.91%) | 2.03 (+38.3%) |
| A100 | 1024 | 10 | OOM | OOM | OOM |
| A100 | 1024 | 8 | OOM | OOM | OOM |
| A100 | 1024 | 4 | OOM | 12.51 | 9.09 |
| A100 | 1024 | 2 | OOM | 6.52 | 4.96 |
| A100 | 1024 | 1 | 6.4 | 3.61 (+43.59%) | 2.81 (+56.09%) |
| V100 | 512 | 10 | OOM | 10.03 | 9.29 |
| V100 | 512 | 8 | OOM | 8.05 | 7.47 |
| V100 | 512 | 4 | 5.7 | 4.3 (+24.56%) | 3.98 (+30.18%) |
| V100 | 512 | 2 | 3.14 | 2.43 (+22.61%) | 2.27 (+27.71%) |
| V100 | 512 | 1 | 1.88 | 1.57 (+16.49%) | 1.57 (+16.49%) |
| V100 | 768 | 10 | OOM | OOM | 23.67 |
| V100 | 768 | 8 | OOM | OOM | 18.81 |
| V100 | 768 | 4 | OOM | 11.81 | 9.7 |
| V100 | 768 | 2 | OOM | 6.27 | 5.2 |
| V100 | 768 | 1 | 5.43 | 3.38 (+37.75%) | 2.82 (+48.07%) |
| V100 | 1024 | 10 | OOM | OOM | OOM |
| V100 | 1024 | 8 | OOM | OOM | OOM |
| V100 | 1024 | 4 | OOM | OOM | 19.35 |
| V100 | 1024 | 2 | OOM | 13 | 10.78 |
| V100 | 1024 | 1 | OOM | 6.66 | 5.54 |
As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/dffe9e00977300e3a902bbebce8744a1.txt b/scrapped_outputs/dffe9e00977300e3a902bbebce8744a1.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/dffe9e00977300e3a902bbebce8744a1.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. 
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e036e3db58dbcb6a51e92b99cb438ccd.txt b/scrapped_outputs/e036e3db58dbcb6a51e92b99cb438ccd.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9b53eb8a868ef3829ac58348524811ec445482c --- /dev/null +++ b/scrapped_outputs/e036e3db58dbcb6a51e92b99cb438ccd.txt @@ -0,0 +1,143 @@ +BLIP-Diffusion BLIP-Diffusion was proposed in BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing. It enables zero-shot subject-driven generation and control-guided zero-shot generation. The abstract from the paper is: Subject-driven text-to-image generation models create novel renditions of an input subject based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties preserving the subject fidelity. To overcome these limitations, we introduce BLIP-Diffusion, a new subject-driven image generation model that supports multimodal control which consumes inputs of subject images and text prompts. Unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation. We first pre-train the multimodal encoder following BLIP-2 to produce visual representation aligned with the text. Then we design a subject representation learning task which enables a diffusion model to leverage such visual representation and generates new subject renditions. Compared with previous methods such as DreamBooth, our model enables zero-shot subject-driven generation, and efficient fine-tuning for customized subject with up to 20x speedup. We also demonstrate that BLIP-Diffusion can be flexibly combined with existing techniques such as ControlNet and prompt-to-prompt to enable novel subject-driven generation and editing applications. Project page at this https URL. The original codebase can be found at salesforce/LAVIS. You can find the official BLIP-Diffusion checkpoints under the hf.co/SalesForce organization. BlipDiffusionPipeline and BlipDiffusionControlNetPipeline were contributed by ayushtues. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
BlipDiffusionPipeline class diffusers.BlipDiffusionPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Zero-Shot Subject Driven Generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: List reference_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. 
Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionPipeline.from_pretrained( +... "Salesforce/blipdiffusion", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> cond_subject = "dog" +>>> tgt_subject = "dog" +>>> text_prompt_input = "swimming underwater" + +>>> cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg" +... ) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 25 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt_input, +... cond_image, +... cond_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") BlipDiffusionControlNetPipeline class diffusers.BlipDiffusionControlNetPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: ContextCLIPTextModel vae: AutoencoderKL unet: UNet2DConditionModel scheduler: PNDMScheduler qformer: Blip2QFormerModel controlnet: ControlNetModel image_processor: BlipImageProcessor ctx_begin_pos: int = 2 mean: List = None std: List = None ) Parameters tokenizer (CLIPTokenizer) — +Tokenizer for the text encoder text_encoder (ContextCLIPTextModel) — +Text encoder to encode the text prompt vae (AutoencoderKL) — +VAE model to map the latents to the image unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. scheduler (PNDMScheduler) — +A scheduler to be used in combination with unet to generate image latents. qformer (Blip2QFormerModel) — +QFormer model to get multi-modal embeddings from the text and image. controlnet (ControlNetModel) — +ControlNet model to get the conditioning image embedding. image_processor (BlipImageProcessor) — +Image Processor to preprocess and postprocess the image. ctx_begin_pos (int, optional, defaults to 2) — +Position of the context token in the text encoder. Pipeline for Canny Edge based Controlled subject-driven generation using Blip Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: List reference_image: Image condtioning_image: Image source_subject_category: List target_subject_category: List latents: Optional = None guidance_scale: float = 7.5 height: int = 512 width: int = 512 num_inference_steps: int = 50 generator: Union = None neg_prompt: Optional = '' prompt_strength: float = 1.0 prompt_reps: int = 20 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (List[str]) — +The prompt or prompts to guide the image generation. reference_image (PIL.Image.Image) — +The reference image to condition the generation on. condtioning_image (PIL.Image.Image) — +The conditioning canny edge image to condition the generation on. source_subject_category (List[str]) — +The source subject category. target_subject_category (List[str]) — +The target subject category. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by random sampling. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. height (int, optional, defaults to 512) — +The height of the generated image. width (int, optional, defaults to 512) — +The width of the generated image. seed (int, optional, defaults to 42) — +The seed to use for random generation. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. neg_prompt (str, optional, defaults to "") — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_strength (float, optional, defaults to 1.0) — +The strength of the prompt. Specifies the number of times the prompt is repeated along with prompt_reps +to amplify the prompt. prompt_reps (int, optional, defaults to 20) — +The number of times the prompt is repeated along with prompt_strength to amplify the prompt. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers.pipelines import BlipDiffusionControlNetPipeline +>>> from diffusers.utils import load_image +>>> from controlnet_aux import CannyDetector +>>> import torch + +>>> blip_diffusion_pipe = BlipDiffusionControlNetPipeline.from_pretrained( +... "Salesforce/blipdiffusion-controlnet", torch_dtype=torch.float16 +... ).to("cuda") + +>>> style_subject = "flower" +>>> tgt_subject = "teapot" +>>> text_prompt = "on a marble table" + +>>> cldm_cond_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/kettle.jpg" +... ).resize((512, 512)) +>>> canny = CannyDetector() +>>> cldm_cond_image = canny(cldm_cond_image, 30, 70, output_type="pil") +>>> style_image = load_image( +... "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/flower.jpg" +... 
) +>>> guidance_scale = 7.5 +>>> num_inference_steps = 50 +>>> negative_prompt = "over-exposure, under-exposure, saturated, duplicate, out of frame, lowres, cropped, worst quality, low quality, jpeg artifacts, morbid, mutilated, out of frame, ugly, bad anatomy, bad proportions, deformed, blurry, duplicate" + + +>>> output = blip_diffusion_pipe( +... text_prompt, +... style_image, +... cldm_cond_image, +... style_subject, +... tgt_subject, +... guidance_scale=guidance_scale, +... num_inference_steps=num_inference_steps, +... neg_prompt=negative_prompt, +... height=512, +... width=512, +... ).images +>>> output[0].save("image.png") diff --git a/scrapped_outputs/e051e387a59b8b4dcab9aea7cfd4b633.txt b/scrapped_outputs/e051e387a59b8b4dcab9aea7cfd4b633.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c696398635d3121e95a98f588be43126adc80ee --- /dev/null +++ b/scrapped_outputs/e051e387a59b8b4dcab9aea7cfd4b633.txt @@ -0,0 +1,323 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. 
If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. 
If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. 
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
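The QKV-fusion helpers above are experimental, so the following is only a minimal sketch of how they might be toggled around inference; the "runwayml/stable-diffusion-v1-5" checkpoint and the prompt are reused from the earlier examples, and the exact speed benefit depends on the attention backend in use.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Fuse the query/key/value projections in the UNet and VAE before inference ...
pipe.fuse_qkv_projections(unet=True, vae=True)
image = pipe("A backpack", num_inference_steps=50).images[0]

# ... and undo the fusion if the original projection layout is needed again.
pipe.unfuse_qkv_projections(unet=True, vae=True)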
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. 
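Because this output class is a Flax struct, replace returns a modified copy without mutating the original object. A tiny, hedged illustration (output stands for the FlaxStableDiffusionPipelineOutput returned by the pipeline call above; the overridden field value is purely illustrative):

# `output` is assumed to be the result of a pipeline call like the Flax example above.
cleared = output.replace(nsfw_content_detected=[False] * len(output.images))
# `output` itself is left unchanged; `cleared` carries the new field value.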
diff --git a/scrapped_outputs/e080ac20beb675e4263ddba87e93b14a.txt b/scrapped_outputs/e080ac20beb675e4263ddba87e93b14a.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/e080ac20beb675e4263ddba87e93b14a.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. 
The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
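As a rough sketch of how this inverse scheduler is usually wired up for latent inversion, the snippet below pairs it with the regular multistep solver inside the DiffEdit pipeline; the StableDiffusionDiffEditPipeline class and the "stabilityai/stable-diffusion-2-1" checkpoint are assumptions chosen for illustration, and any compatible Stable Diffusion checkpoint should work.

import torch
from diffusers import (
    DPMSolverMultistepInverseScheduler,
    DPMSolverMultistepScheduler,
    StableDiffusionDiffEditPipeline,
)

# Assumed checkpoint; swap in any compatible Stable Diffusion model.
pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# The forward (denoising) scheduler and its inverse counterpart share one config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(pipe.scheduler.config)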
diff --git a/scrapped_outputs/e0ad28abcf5d5bcfc84d1ab1ee847641.txt b/scrapped_outputs/e0ad28abcf5d5bcfc84d1ab1ee847641.txt new file mode 100644 index 0000000000000000000000000000000000000000..5eb8aca237f4b1aa72ff085bbc8ab70f6ba7cd91 --- /dev/null +++ b/scrapped_outputs/e0ad28abcf5d5bcfc84d1ab1ee847641.txt @@ -0,0 +1,128 @@ +LoRA LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MBs) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, text encoder or both. There are two classes for loading LoRA weights: LoraLoaderMixin provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model. StableDiffusionXLLoraLoaderMixin is a Stable Diffusion (SDXL) version of the LoraLoaderMixin class for loading and saving LoRA weights. It can only be used with the SDXL model. To learn more about how to load LoRA weights, see the LoRA loading guide. LoraLoaderMixin class diffusers.loaders.LoraLoaderMixin < source > ( ) Load LoRA layers into UNet2DConditionModel and +CLIPTextModel. delete_adapters < source > ( adapter_names: Union ) Parameters Deletes the LoRA layers of adapter_name for the unet and text-encoder(s). — +adapter_names (Union[List[str], str]): +The names of the adapter to delete. Can be a single string or a list of strings disable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to disable the LoRA layers for. If None, it will try to get the +text_encoder attribute. Disables the LoRA layers for the text encoder. enable_lora_for_text_encoder < source > ( text_encoder: Optional = None ) Parameters text_encoder (torch.nn.Module, optional) — +The text encoder module to enable the LoRA layers for. If None, it will try to get the text_encoder +attribute. Enables the LoRA layers for the text encoder. fuse_lora < source > ( fuse_unet: bool = True fuse_text_encoder: bool = True lora_scale: float = 1.0 safe_fusing: bool = False adapter_names: Optional = None ) Parameters fuse_unet (bool, defaults to True) — Whether to fuse the UNet LoRA parameters. fuse_text_encoder (bool, defaults to True) — +Whether to fuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. lora_scale (float, defaults to 1.0) — +Controls how much to influence the outputs with the LoRA parameters. safe_fusing (bool, defaults to False) — +Whether to check fused weights for NaN values before fusing and if values are NaN not fusing them. adapter_names (List[str], optional) — +Adapter names to be used for fusing. If nothing is passed, all active adapters will be fused. Fuses the LoRA parameters into the original parameters of the corresponding blocks. This is an experimental API. Example: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +).to("cuda") +pipeline.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel") +pipeline.fuse_lora(lora_scale=0.7) get_active_adapters < source > ( ) Gets the list of the current active adapters. 
Example: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", +).to("cuda") +pipeline.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy") +pipeline.get_active_adapters() get_list_adapters < source > ( ) Gets the current list of all available adapters in the pipeline. load_lora_into_text_encoder < source > ( state_dict network_alphas text_encoder prefix = None lora_scale = 1.0 low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The key should be prefixed with an +additional text_encoder to distinguish between unet lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. text_encoder (CLIPTextModel) — +The text encoder model to load the LoRA layers into. prefix (str) — +Expected prefix of the text_encoder in the state_dict. lora_scale (float) — +How much to scale the output of the lora linear layer before it is added with the output of the regular +lora layer. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into text_encoder load_lora_into_transformer < source > ( state_dict network_alphas transformer low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into transformer. load_lora_into_unet < source > ( state_dict network_alphas unet low_cpu_mem_usage = None adapter_name = None _pipeline = None ) Parameters state_dict (dict) — +A standard state dict containing the lora layer parameters. The keys can either be indexed directly +into the unet or prefixed with an additional unet which can be used to distinguish between text +encoder lora layers. network_alphas (Dict[str, float]) — +See LoRALinearLayer for more details. 
unet (UNet2DConditionModel) — +The UNet model to load the LoRA layers into. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. This will load the LoRA layers specified in state_dict into unet. load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. lora_state_dict < source > ( pretrained_model_name_or_path_or_dict: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. 
low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Return state dict for lora weights and the network alphas. We support loading A1111 formatted LoRA checkpoints in a limited capacity. This function is experimental and might change in the future. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. set_adapters_for_text_encoder < source > ( adapter_names: Union text_encoder: Optional = None text_encoder_weights: List = None ) Parameters adapter_names (List[str] or str) — +The names of the adapters to use. text_encoder (torch.nn.Module, optional) — +The text encoder module to set the adapter layers for. If None, it will try to get the text_encoder +attribute. text_encoder_weights (List[float], optional) — +The weights to use for the text encoder. If None, the weights are set to 1.0 for all the adapters. Sets the adapter layers for the text encoder. set_lora_device < source > ( adapter_names: List device: Union ) Parameters adapter_names (List[str]) — +List of adapters to send device to. device (Union[torch.device, str, int]) — +Device to send the adapters to. Can be either a torch device, a str or an integer. Moves the LoRAs listed in adapter_names to a target device. Useful for offloading the LoRA to the CPU in case +you want to load multiple adapters and free some GPU memory. 
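A minimal sketch of set_lora_device, assuming two adapters named "pixel" and "toy" (the adapter names used in the examples above) have already been loaded into pipeline via load_lora_weights:

# Move one adapter to the CPU to free GPU memory, keep the other on the GPU.
pipeline.set_lora_device(adapter_names=["pixel"], device="cpu")
pipeline.set_lora_device(adapter_names=["toy"], device="cuda")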
unfuse_lora < source > ( unfuse_unet: bool = True unfuse_text_encoder: bool = True ) Parameters unfuse_unet (bool, defaults to True) — Whether to unfuse the UNet LoRA parameters. unfuse_text_encoder (bool, defaults to True) — +Whether to unfuse the text encoder LoRA parameters. If the text encoder wasn’t monkey-patched with the +LoRA parameters then it won’t have any effect. Reverses the effect of +pipe.fuse_lora(). This is an experimental API. unload_lora_weights < source > ( ) Unloads the LoRA parameters. Examples: Copied >>> # Assuming `pipeline` is already loaded with the LoRA parameters. +>>> pipeline.unload_lora_weights() +>>> ... StableDiffusionXLLoraLoaderMixin class diffusers.loaders.StableDiffusionXLLoraLoaderMixin < source > ( ) This class overrides LoraLoaderMixin with LoRA loading/saving code that’s specific to SDXL load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name: Optional = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. kwargs (dict, optional) — +See lora_state_dict(). Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. diff --git a/scrapped_outputs/e0b45f887c57205af94755c1f4815a1b.txt b/scrapped_outputs/e0b45f887c57205af94755c1f4815a1b.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/e0b45f887c57205af94755c1f4815a1b.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/e0f9e159e74f8982cc98e9de8bd79409.txt b/scrapped_outputs/e0f9e159e74f8982cc98e9de8bd79409.txt new file mode 100644 index 0000000000000000000000000000000000000000..98b1f46689d66a702a6b7e7c7df7d908b16635d7 --- /dev/null +++ b/scrapped_outputs/e0f9e159e74f8982cc98e9de8bd79409.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. 
cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. +num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +TransformerTemporalModelOutput or tuple + +If return_dict is True, an TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. 
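To make the expected tensor layout concrete, here is a small, hedged forward-pass sketch with random inputs; the configuration values and spatial sizes are illustrative assumptions, not the settings of any released checkpoint.

import torch
from diffusers.models import TransformerTemporalModel

# Illustrative configuration; real video UNets use larger channel counts.
model = TransformerTemporalModel(
    num_attention_heads=8, attention_head_dim=32, in_channels=256, num_layers=1
)

batch, num_frames, channels, height, width = 1, 8, 256, 16, 16
# Frames are folded into the batch dimension before being passed to this block.
hidden_states = torch.randn(batch * num_frames, channels, height, width)

sample = model(hidden_states, num_frames=num_frames).sample
print(sample.shape)  # torch.Size([8, 256, 16, 16])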
diff --git a/scrapped_outputs/e14f406930d683cfac9c42d2702b1e2e.txt b/scrapped_outputs/e14f406930d683cfac9c42d2702b1e2e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e182125a3d41f34e7dc425e333098007.txt b/scrapped_outputs/e182125a3d41f34e7dc425e333098007.txt new file mode 100644 index 0000000000000000000000000000000000000000..2add5dcbc2dfbc796cac5009a8f482715b5ce8eb --- /dev/null +++ b/scrapped_outputs/e182125a3d41f34e7dc425e333098007.txt @@ -0,0 +1,5 @@ +UVit2DModel The U-ViT model is a vision transformer (ViT) based UNet. This model incorporates elements from ViT (considers all inputs such as time, conditions and noisy image patches as tokens) and a UNet (long skip connections between the shallow and deep layers). The skip connection is important for predicting pixel-level features. An additional 3x3 convolutional block is applied prior to the final output to improve image quality. The abstract from the paper is: Currently, applying diffusion models in pixel space of high resolution images is difficult. Instead, existing approaches focus on diffusion in lower dimensional spaces (latent diffusion), or have multiple super-resolution levels of generation referred to as cascades. The downside is that these approaches add additional complexity to the diffusion framework. This paper aims to improve denoising diffusion for high resolution images while keeping the model as simple as possible. The paper is centered around the research question: How can one train a standard denoising diffusion models on high resolution images, and still obtain performance comparable to these alternate approaches? The four main findings are: 1) the noise schedule should be adjusted for high resolution images, 2) It is sufficient to scale only a particular part of the architecture, 3) dropout should be added at specific locations in the architecture, and 4) downsampling is an effective strategy to avoid high resolution feature maps. Combining these simple yet effective techniques, we achieve state-of-the-art on image generation among diffusion models without sampling modifiers on ImageNet. UVit2DModel class diffusers.UVit2DModel < source > ( hidden_size: int = 1024 use_bias: bool = False hidden_dropout: float = 0.0 cond_embed_dim: int = 768 micro_cond_encode_dim: int = 256 micro_cond_embed_dim: int = 1280 encoder_hidden_size: int = 768 vocab_size: int = 8256 codebook_size: int = 8192 in_channels: int = 768 block_out_channels: int = 768 num_res_blocks: int = 3 downsample: bool = False upsample: bool = False block_num_heads: int = 12 num_hidden_layers: int = 22 num_attention_heads: int = 16 attention_dropout: float = 0.0 intermediate_size: int = 2816 layer_norm_eps: float = 1e-06 ln_elementwise_affine: bool = True sample_size: int = 64 ) set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
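As a hedged sketch of the two attention-processor helpers documented above (note that instantiating the default UVit2DModel configuration allocates the full model, so treat this as illustrative rather than a recipe):

from diffusers import UVit2DModel
from diffusers.models.attention_processor import AttnProcessor

model = UVit2DModel()

# Use a single processor instance for every attention layer ...
model.set_attn_processor(AttnProcessor())

# ... then revert to the library's default attention implementation.
model.set_default_attn_processor()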
UVit2DConvEmbed class diffusers.models.unets.uvit_2d.UVit2DConvEmbed < source > ( in_channels block_out_channels vocab_size elementwise_affine eps bias ) UVitBlock class diffusers.models.unets.uvit_2d.UVitBlock < source > ( channels num_res_blocks: int hidden_size hidden_dropout ln_elementwise_affine layer_norm_eps use_bias block_num_heads attention_dropout downsample: bool upsample: bool ) ConvNextBlock class diffusers.models.unets.uvit_2d.ConvNextBlock < source > ( channels layer_norm_eps ln_elementwise_affine use_bias hidden_dropout hidden_size res_ffn_factor = 4 ) ConvMlmLayer class diffusers.models.unets.uvit_2d.ConvMlmLayer < source > ( block_out_channels: int in_channels: int use_bias: bool ln_elementwise_affine: bool layer_norm_eps: float codebook_size: int ) diff --git a/scrapped_outputs/e18cd2b3612e5f88156c1ad83d6e6dc1.txt b/scrapped_outputs/e18cd2b3612e5f88156c1ad83d6e6dc1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e18f1b0c216e99fbb6daca3f8b0b7386.txt b/scrapped_outputs/e18f1b0c216e99fbb6daca3f8b0b7386.txt new file mode 100644 index 0000000000000000000000000000000000000000..a9b23cd194564c43aca8fd94b78d118e14153f64 --- /dev/null +++ b/scrapped_outputs/e18f1b0c216e99fbb6daca3f8b0b7386.txt @@ -0,0 +1,263 @@ +🧪 This pipeline is for research purposes only. Text-to-video ModelScope Text-to-Video Technical Report is by Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, Shiwei Zhang. The abstract from the paper is: This paper introduces ModelScopeT2V, a text-to-video synthesis model that evolves from a text-to-image synthesis model (i.e., Stable Diffusion). ModelScopeT2V incorporates spatio-temporal blocks to ensure consistent frame generation and smooth movement transitions. The model could adapt to varying frame numbers during training and inference, rendering it suitable for both image-text and video-text datasets. ModelScopeT2V brings together three components (i.e., VQGAN, a text encoder, and a denoising UNet), totally comprising 1.7 billion parameters, in which 0.5 billion parameters are dedicated to temporal capabilities. The model demonstrates superior performance over state-of-the-art methods across three evaluation metrics. The code and an online demo are available at https://modelscope.cn/models/damo/text-to-video-synthesis/summary. You can find additional information about Text-to-Video on the project page, original codebase, and try it out in a demo. Official checkpoints can be found at damo-vilab and cerspense. Usage example text-to-video-ms-1.7b Let’s start by generating a short video with the default length of 16 frames (2s at 8 fps): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe = pipe.to("cuda") + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt).frames +video_path = export_to_video(video_frames) +video_path Diffusers supports different optimization techniques to improve the latency +and memory footprint of a pipeline. Since videos are often more memory-heavy than images, +we can enable CPU offloading and VAE slicing to keep the memory footprint at bay. 
Let’s generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing: Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=64).frames +video_path = export_to_video(video_frames) +video_path It just takes 7 GBs of GPU memory to generate the 64 video frames using PyTorch 2.0, “fp16” precision and the techniques mentioned above. We can also use a different scheduler easily, using the same method we’d use for Stable Diffusion: Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video + +pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16") +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +prompt = "Spiderman is surfing" +video_frames = pipe(prompt, num_inference_steps=25).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: An astronaut riding a horse. + Darth vader surfing in waves. + cerspense/zeroscope_v2_576w & cerspense/zeroscope_v2_XL Zeroscope are watermark-free model and have been trained on specific sizes such as 576x320 and 1024x576. +One should first generate a video using the lower resolution checkpoint cerspense/zeroscope_v2_576w with TextToVideoSDPipeline, +which can then be upscaled using VideoToVideoSDPipeline and cerspense/zeroscope_v2_XL. Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import export_to_video +from PIL import Image + +pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +prompt = "Darth Vader surfing a wave" +video_frames = pipe(prompt, num_frames=24).frames +video_path = export_to_video(video_frames) +video_path Now the video can be upscaled: Copied pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_XL", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() + +# memory optimization +pipe.unet.enable_forward_chunking(chunk_size=1, dim=1) +pipe.enable_vae_slicing() + +video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +video_frames = pipe(prompt, video=video, strength=0.6).frames +video_path = export_to_video(video_frames) +video_path Here are some sample outputs: Darth vader surfing in waves. + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
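For reproducible runs, the pipeline call also accepts a seeded torch.Generator; the sketch below reuses the checkpoint and prompt from the examples above with an arbitrary seed.

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Seeding the generator makes repeated runs produce the same frames.
generator = torch.Generator(device="cpu").manual_seed(0)
video_frames = pipe("Spiderman is surfing", generator=generator, num_inference_steps=25).frames
video_path = export_to_video(video_frames)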
TextToVideoSDPipeline class diffusers.TextToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_frames: int = 16 num_inference_steps: int = 50 guidance_scale: float = 9.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per seconds +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import TextToVideoSDPipeline +>>> from diffusers.utils import export_to_video + +>>> pipe = TextToVideoSDPipeline.from_pretrained( +... "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16" +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "Spiderman is surfing" +>>> video_frames = pipe(prompt).frames +>>> video_path = export_to_video(video_frames) +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. 
b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. VideoToVideoSDPipeline class diffusers.VideoToVideoSDPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet3DConditionModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode videos to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet3DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided video-to-video generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None video: Union = None strength: float = 0.6 num_inference_steps: int = 50 guidance_scale: float = 15.0 negative_prompt: Union = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'np' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → TextToVideoSDPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video (List[np.ndarray] or torch.FloatTensor) — +video frames or tensor representing a video batch to be used as the starting point for the process. +Can also accept video latents as image, if passing latents directly, it will not be encoded again. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference video. Must be between 0 and 1. video is used as a +starting point, adding more noise to it the larger the strength. The number of denoising steps +depends on the amount of noise initially added. When strength is 1, added noise is maximum and the +denoising process runs for the full number of iterations specified in num_inference_steps. A value of +1 essentially ignores video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality videos at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "np") — +The output format of the generated video. Choose between torch.FloatTensor or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +>>> from diffusers.utils import export_to_video + +>>> pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.to("cuda") + +>>> prompt = "spiderman running in the desert" +>>> video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames +>>> # safe low-res video +>>> video_path = export_to_video(video_frames, output_video_path="./video_576_spiderman.mp4") + +>>> # let's offload the text-to-image model +>>> pipe.to("cpu") + +>>> # and load the image-to-image model +>>> pipe = DiffusionPipeline.from_pretrained( +... "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16, revision="refs/pr/15" +... ) +>>> pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +>>> pipe.enable_model_cpu_offload() + +>>> # The VAE consumes A LOT of memory, let's make sure we run it in sliced mode +>>> pipe.vae.enable_slicing() + +>>> # now let's upscale it +>>> video = [Image.fromarray(frame).resize((1024, 576)) for frame in video_frames] + +>>> # and denoise it +>>> video_frames = pipe(prompt, video=video, strength=0.6).frames +>>> video_path = export_to_video(video_frames, output_video_path="./video_1024_spiderman.mp4") +>>> video_path disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. TextToVideoSDPipelineOutput class diffusers.pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput < source > ( frames: Union ) Parameters frames (List[np.ndarray] or torch.FloatTensor) — +List of denoised frames (essentially images) as NumPy arrays of shape (height, width, num_channels) or as +a torch tensor. The length of the list denotes the video length (the number of frames). Output class for text-to-video pipelines. 
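The enable_freeu() method documented above has no usage example in this section; here is a minimal sketch for the text-to-video pipeline. The scaling factors shown are illustrative placeholders rather than officially recommended values — consult the FreeU repository referenced above for settings tuned to your checkpoint: Copied
import torch
from diffusers import TextToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = TextToVideoSDPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Illustrative FreeU factors: s1/s2 attenuate skip features, b1/b2 amplify backbone features
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

video_frames = pipe("Spiderman is surfing", num_inference_steps=25).frames
video_path = export_to_video(video_frames)

# FreeU can be switched off again without reloading the pipeline
pipe.disable_freeu()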
diff --git a/scrapped_outputs/e19a15b533f5574adf4af7c2d495b3bc.txt b/scrapped_outputs/e19a15b533f5574adf4af7c2d495b3bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e1a4bb6578f8509b13d2cbe473cc04ea.txt b/scrapped_outputs/e1a4bb6578f8509b13d2cbe473cc04ea.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/e1a4bb6578f8509b13d2cbe473cc04ea.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. --pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. 
Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. 
Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MB). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl.wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on distilling an LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRAs for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/e1b9eedb25efcd133fe72ddd17213ccf.txt b/scrapped_outputs/e1b9eedb25efcd133fe72ddd17213ccf.txt new file mode 100644 index 0000000000000000000000000000000000000000..a4946c1f029b1a65fb2ea488de115da9e2a87fcc --- /dev/null +++ b/scrapped_outputs/e1b9eedb25efcd133fe72ddd17213ccf.txt @@ -0,0 +1,43 @@ +DPMSolverSDEScheduler The DPMSolverSDEScheduler is inspired by the stochastic sampler from the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. DPMSolverSDEScheduler class diffusers.DPMSolverSDEScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False noise_sampler_seed: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper).
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. noise_sampler_seed (int, optional, defaults to None) — +The random seed to use for the noise sampler. If None, a random seed is generated. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverSDEScheduler implements the stochastic sampler from the Elucidating the Design Space of Diffusion-Based +Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True s_noise: float = 1.0 ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor or np.ndarray) — +The direct output from learned diffusion model. timestep (float or torch.FloatTensor) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor or np.ndarray) — +A current instance of a sample created by the diffusion process. return_dict (bool, optional, defaults to True) — +Whether or not to return a SchedulerOutput or tuple. s_noise (float, optional, defaults to 1.0) — +Scaling factor for noise added to the sample. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. 
prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/e21487664fe93788e27f9491649d1007.txt b/scrapped_outputs/e21487664fe93788e27f9491649d1007.txt new file mode 100644 index 0000000000000000000000000000000000000000..77bfc70e39049721df753225367296a6dc627c51 --- /dev/null +++ b/scrapped_outputs/e21487664fe93788e27f9491649d1007.txt @@ -0,0 +1,124 @@ +PixArt-α PixArt-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis is Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li. The abstract from the paper is: The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering the fundamental innovation for the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px resolution with low training cost, as shown in Figure 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) High-informative data: We emphasize the significance of concept density in text-image pairs and leverage a large Vision-Language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α’s training speed markedly surpasses existing large-scale T2I models, e.g., PIXART-α only takes 10.8% of Stable Diffusion v1.5’s training time (675 vs. 6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing 90% CO2 emissions. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1%. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch. You can find the original codebase at PixArt-alpha/PixArt-alpha and all the available checkpoints at PixArt-alpha. Some notes about this pipeline: It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as DiT. It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details. It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found here. It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as Stable Diffusion XL, Imagen, and DALL-E 2, while being more efficient than them. 
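Before diving into the memory-constrained workflow below, a minimal text-to-image call with this pipeline looks like the following sketch; it mirrors the example in the API reference further down, and the prompt is illustrative: Copied
import torch
from diffusers import PixArtAlphaPipeline

pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
)
# Offloading keeps peak GPU memory low at a small speed cost
pipe.enable_model_cpu_offload()

image = pipe("A small cactus with a happy face in the Sahara desert.").images[0]
image.save("cactus.png")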
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. Inference with under 8GB GPU VRAM Run the PixArtAlphaPipeline with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let’s walk through a full-fledged example. First, install the bitsandbytes library: Copied pip install -U bitsandbytes Then load the text encoder in 8-bit: Copied from transformers import T5EncoderModel +from diffusers import PixArtAlphaPipeline +import torch + +text_encoder = T5EncoderModel.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + subfolder="text_encoder", + load_in_8bit=True, + device_map="auto", +) +pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=text_encoder, + transformer=None, + device_map="auto" +) Now, use the pipe to encode a prompt: Copied with torch.no_grad(): + prompt = "cute cat" + prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt) Since the text embeddings have been computed, remove the text_encoder and pipe from memory to free up some GPU VRAM: Copied import gc + +def flush(): + gc.collect() + torch.cuda.empty_cache() + +del text_encoder +del pipe +flush() Then compute the latents with the prompt embeddings as inputs: Copied pipe = PixArtAlphaPipeline.from_pretrained( + "PixArt-alpha/PixArt-XL-2-1024-MS", + text_encoder=None, + torch_dtype=torch.float16, +).to("cuda") + +latents = pipe( + negative_prompt=None, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + prompt_attention_mask=prompt_attention_mask, + negative_prompt_attention_mask=negative_prompt_attention_mask, + num_images_per_prompt=1, + output_type="latent", +).images + +del pipe.transformer +flush() Notice that while initializing pipe, you’re setting text_encoder to None so that it’s not loaded. Once the latents are computed, pass them off to the VAE to decode into a real image: Copied with torch.no_grad(): + image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0] +image = pipe.image_processor.postprocess(image, output_type="pil")[0] +image.save("cat.png") By deleting components you aren’t using and flushing the GPU VRAM, you should be able to run PixArtAlphaPipeline with under 8GB GPU VRAM. If you want a report of your memory usage, run this script. Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It’s recommended to compare the outputs with and without 8-bit. While loading the text_encoder, you set load_in_8bit to True. You could also specify load_in_4bit to bring your memory requirements down even further to under 7GB. PixArtAlphaPipeline class diffusers.PixArtAlphaPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel vae: AutoencoderKL transformer: Transformer2DModel scheduler: DPMSolverMultistepScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (T5EncoderModel) — +Frozen text-encoder. PixArt-Alpha uses +T5, specifically the +t5-v1_1-xxl variant. tokenizer (T5Tokenizer) — +Tokenizer of class +T5Tokenizer.
transformer (Transformer2DModel) — +A text conditioned Transformer2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using PixArt-Alpha. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None negative_prompt: str = '' num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.5 num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_embeds: Optional = None negative_prompt_attention_mask: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True use_resolution_binning: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. 
prompt_attention_mask (torch.FloatTensor, optional) — Pre-generated attention mask for text embeddings. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha this negative prompt should be "". If not +provided, negative_prompt_embeds will be generated from negative_prompt input argument. negative_prompt_attention_mask (torch.FloatTensor, optional) — +Pre-generated attention mask for negative text embeddings. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. use_resolution_binning (bool defaults to True) — +If set to True, the requested height and width are first mapped to the closest resolutions using +ASPECT_RATIO_1024_BIN. After the produced latents are decoded into images, they are resized back to +the requested resolution. Useful for generating non-square images. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import PixArtAlphaPipeline + +>>> # You can replace the checkpoint id with "PixArt-alpha/PixArt-XL-2-512x512" too. +>>> pipe = PixArtAlphaPipeline.from_pretrained("PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16) +>>> # Enable memory optimizations. +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A small cactus with a happy face in the Sahara desert." +>>> image = pipe(prompt).images[0] classify_height_width_bin < source > ( height: int width: int ratios: dict ) Returns binned height and width. encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True negative_prompt: str = '' num_images_per_prompt: int = 1 device: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None prompt_attention_mask: Optional = None negative_prompt_attention_mask: Optional = None clean_caption: bool = False **kwargs ) Parameters prompt (str or List[str], optional) — +prompt to be encoded negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. If not defined, one has to pass negative_prompt_embeds +instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). For +PixArt-Alpha, this should be "". 
do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. For PixArt-Alpha, it’s should be the embeddings of the "" +string. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/e220ad11f996b90ca8fc436756247ce7.txt b/scrapped_outputs/e220ad11f996b90ca8fc436756247ce7.txt new file mode 100644 index 0000000000000000000000000000000000000000..7a6daebd26f703523e74d8ad50de1df486caf45f --- /dev/null +++ b/scrapped_outputs/e220ad11f996b90ca8fc436756247ce7.txt @@ -0,0 +1,253 @@ +PaintByExample + + +Overview + +Paint by Example: Exemplar-based Image Editing with Diffusion Models by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen +The abstract of the paper is the following: +Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. +The original codebase can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_paint_by_example.py +Image-Guided Image Painting +- + +Tips + +PaintByExample is supported by the official Fantasy-Studio/Paint-by-Example checkpoint. 
The checkpoint has been warm-started from the CompVis/stable-diffusion-v1-4 and with the objective to inpaint partly masked images conditioned on example / reference images +To quickly demo PaintByExample, please have a look at this demo +You can run the following code snippet as an example: + + + Copied +# !pip install diffusers transformers + +import PIL +import requests +import torch +from io import BytesIO +from diffusers import DiffusionPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png" +mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png" +example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) +example_image = download_image(example_url).resize((512, 512)) + +pipe = DiffusionPipeline.from_pretrained( + "Fantasy-Studio/Paint-by-Example", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0] +image + +PaintByExamplePipeline + + +class diffusers.PaintByExamplePipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: PaintByExampleImageEncoder +unet: UNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = False + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
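The __call__ arguments documented below can be combined with the snippet above. As a rough sketch (reusing the pipe, init_image, mask_image, and example_image objects created earlier, so it is not self-contained), the call below passes a seeded generator and an explicit guidance_scale for reproducible, tunable results; the seed and output filename are illustrative: Copied
import torch

# Reuses pipe, init_image, mask_image, and example_image from the snippet above
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe(
    image=init_image,
    mask_image=mask_image,
    example_image=example_image,
    guidance_scale=5.0,
    num_inference_steps=50,
    generator=generator,
).images[0]
image.save("paint_by_example.png")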
+ +__call__ + +< +source +> +( +example_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +image: typing.Union[torch.FloatTensor, PIL.Image.Image] +mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 5.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +example_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +The exemplar image to guide the image generation. + + +image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. + + +mask_image (torch.FloatTensor or PIL.Image.Image or List[PIL.Image.Image]) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
+ + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, +image_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. diff --git a/scrapped_outputs/e221885ed755659009ee76ccf0c22849.txt b/scrapped_outputs/e221885ed755659009ee76ccf0c22849.txt new file mode 100644 index 0000000000000000000000000000000000000000..c8cfdd6624eb6d027328a9b3177a43d058b04844 --- /dev/null +++ b/scrapped_outputs/e221885ed755659009ee76ccf0c22849.txt @@ -0,0 +1,115 @@ +Using KerasCV Stable Diffusion Checkpoints in Diffusers + +This is an experimental feature. +KerasCV provides APIs for implementing various computer vision workflows. It +also provides the Stable Diffusion v1 and v2 +models. Many practitioners find it easy to fine-tune the Stable Diffusion models shipped by KerasCV. However, as of this writing, KerasCV offers limited support to experiment with Stable Diffusion models for inference and deployment. On the other hand, +Diffusers provides tooling dedicated to this purpose (and more), such as different noise schedulers, flash attention, and other +optimization techniques. +How about fine-tuning Stable Diffusion models in KerasCV and exporting them so that they become compatible with Diffusers, combining the +best of both worlds? We have created a tool that +lets you do just that! It takes KerasCV Stable Diffusion checkpoints and exports them to Diffusers-compatible checkpoints. +More specifically, it first converts the checkpoints to PyTorch and then wraps them into a +StableDiffusionPipeline which is ready +for inference. Finally, it pushes the converted checkpoints to a repository on the Hugging Face Hub. +We welcome you to try out the tool here +and share feedback via discussions. + +Getting Started + +First, you need to obtain the fine-tuned KerasCV Stable Diffusion checkpoints. We provide an +overview of the different ways Stable Diffusion models can be fine-tuned using diffusers.
For the Keras implementation of some of these methods, you can check out these resources: +Teach StableDiffusion new concepts via Textual Inversion +Fine-tuning Stable Diffusion +DreamBooth +Prompt-to-Prompt editing +Stable Diffusion is comprised of the following models: +Text encoder +UNet +VAE +Depending on the fine-tuning task, we may fine-tune one or more of these components (the VAE is almost always left untouched). Here are some common combinations: +DreamBooth: UNet and text encoder +Classical text to image fine-tuning: UNet +Textual Inversion: Just the newly initialized embeddings in the text encoder + +Performing the Conversion + +Let’s use this checkpoint which was generated +by conducting Textual Inversion with the following “placeholder token”: . +On the tool, we supply the following things: +Path(s) to download the fine-tuned checkpoint(s) (KerasCV) +An HF token +Placeholder token (only applicable for Textual Inversion) + +As soon as you hit “Submit”, the conversion process will begin. Once it’s complete, you should see the following: + +If you click the link, you +should see something like so: + +If you head over to the model card of the repository, the +following should appear: + +Note that we’re not specifying the UNet weights here since the UNet is not fine-tuned during Textual Inversion. +And that’s it! You now have your fine-tuned KerasCV Stable Diffusion model in Diffusers 🧨 + +Using the Converted Model in Diffusers + +Just beside the model card of the repository, +you’d notice an inference widget to try out the model directly from the UI 🤗 + +On the top right hand side, we provide a “Use in Diffusers” button. If you click the button, you should see the following code-snippet: + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +The model is in standard diffusers format. Let’s perform inference! + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] +And we get: + +Note that if you specified a placeholder_token while performing the conversion, the tool will log it accordingly. Refer +to the model card of this repository +as an example. +We welcome you to use the tool for various Stable Diffusion fine-tuning scenarios and let us know your feedback! Here are some examples +of Diffusers checkpoints that were obtained using the tool: +sayakpaul/text-unet-dogs-kerascv_sd_diffusers_pipeline (DreamBooth with both the text encoder and UNet fine-tuned) +sayakpaul/unet-dogs-kerascv_sd_diffusers_pipeline (DreamBooth with only the UNet fine-tuned) + +Incorporating Diffusers Goodies 🎁 + +Diffusers provides various options that one can leverage to experiment with different inference setups. One particularly +useful option is the use of a different noise scheduler during inference other than what was used during fine-tuning. +Let’s try out the DPMSolverMultistepScheduler +which is different from the one (DDPMScheduler) used during +fine-tuning. +You can read more details about this process in this section. 
+ + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler + +pipeline = DiffusionPipeline.from_pretrained("sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline") +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] + +One can also continue fine-tuning from these Diffusers checkpoints by leveraging some relevant tools from Diffusers. Refer here for +more details. For inference-specific optimizations, refer here. + +Known Limitations + +Only Stable Diffusion v1 checkpoints are supported for conversion in this tool. diff --git a/scrapped_outputs/e22d94c6a90e0ccb813b8076af2eca25.txt b/scrapped_outputs/e22d94c6a90e0ccb813b8076af2eca25.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/e22d94c6a90e0ccb813b8076af2eca25.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. 
It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. diff --git a/scrapped_outputs/e2364ec74838d17ec1fef22691b4d8a3.txt b/scrapped_outputs/e2364ec74838d17ec1fef22691b4d8a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c54914f17fe9d0916ab4c87a6550c0de334316 --- /dev/null +++ b/scrapped_outputs/e2364ec74838d17ec1fef22691b4d8a3.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate omegaconf invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default. 
To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/e290581889ae58fae58c4b4f0ae55c4a.txt b/scrapped_outputs/e290581889ae58fae58c4b4f0ae55c4a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdcd3a6a8fcddc27fec8e1156f213cb014eca381 --- /dev/null +++ b/scrapped_outputs/e290581889ae58fae58c4b4f0ae55c4a.txt @@ -0,0 +1,276 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. 
There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. 
This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! 
Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use a refiner model with StableDiffusionXLControlNetPipeline to improve image quality, just like you can with a regular StableDiffusionXLPipeline. +See the Refine image quality section to learn how to use the refiner model. +Make sure to use StableDiffusionXLControlNetPipeline and pass image and controlnet_conditioning_scale. Copied base = StableDiffusionXLControlNetPipeline(...) +image = base( + prompt=prompt, + controlnet_conditioning_scale=0.5, + image=canny_image, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +# rest exactly as with StableDiffusionXLPipeline MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. 
Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/e296612d010716ae9ec2ad77e3e06a16.txt b/scrapped_outputs/e296612d010716ae9ec2ad77e3e06a16.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e29ea138f1c6914261579c39e115d39c.txt 
b/scrapped_outputs/e29ea138f1c6914261579c39e115d39c.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/e29ea138f1c6914261579c39e115d39c.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
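The SafetyConfig presets are convenience bundles of the sld_* arguments documented below, so you can also pass those arguments explicitly to tune the safety guidance yourself, and combine this with editing safety_concept. The following is a minimal sketch assuming the AIML-TUDA/stable-diffusion-safe checkpoint from above; the specific sld_* values and the prompt are illustrative, not recommendations.

 Copied
import torch
from diffusers import StableDiffusionPipelineSafe

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

# Optionally narrow (or extend) the concept that safety guidance steers away from.
pipeline.safety_concept = "violence, blood, weapons"

prompt = "portrait of a warrior in a burning city, dramatic lighting"
out = pipeline(
    prompt=prompt,
    sld_guidance_scale=2000,  # stronger safety guidance than the default of 1000
    sld_warmup_steps=7,       # apply safety guidance earlier than the default of 10 steps
    sld_threshold=0.025,      # illustrative; the default is 0.01
    sld_momentum_scale=0.5,   # illustrative; the default is 0.3
    sld_mom_beta=0.7,         # illustrative; the default is 0.4
)
image = out.images[0]
print(out.applied_safety_concept)  # the concept that was applied (see the output class below)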
StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only applied for diffusion steps greater than +sld_warmup_steps. sld_threshold (float, optional, defaults to 0.01) — +Threshold that defines the hyperplane separating appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. 
leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/e2f39969eec0efb002d630478f73c150.txt b/scrapped_outputs/e2f39969eec0efb002d630478f73c150.txt new file mode 100644 index 0000000000000000000000000000000000000000..d5b7d8b4b3e7332a66718ccf8ad4bab757e09676 --- /dev/null +++ b/scrapped_outputs/e2f39969eec0efb002d630478f73c150.txt @@ -0,0 +1,526 @@ +Audio Diffusion + + +Overview + +Audio Diffusion by Robert Dargavel Smith. +Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to +and from mel spectrogram images. +The original codebase of this implementation can be found here, including +training scripts and example notebooks. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_audio_diffusion.py +Unconditional Audio Generation + + +Examples: + + +Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=mel.get_sample_rate())) + +Latent Audio Diffusion + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Audio Diffusion with DDIM (faster) + + + + Copied +import torch +from IPython.display import Audio +from diffusers import DiffusionPipeline + +device = "cuda" if torch.cuda.is_available() else "cpu" +pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device) + +output = pipe() +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +Variations, in-painting, out-painting etc. 
+ + + + Copied +output = pipe( + raw_audio=output.audios[0, 0], + start_step=int(pipe.get_default_steps() / 2), + mask_start_secs=1, + mask_end_secs=1, +) +display(output.images[0]) +display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate())) + +AudioDiffusionPipeline + + +class diffusers.AudioDiffusionPipeline + +< +source +> +( +vqvae: AutoencoderKL +unet: UNet2DConditionModel +mel: Mel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_ddpm.DDPMScheduler] + +) + + +Parameters + +vqae (AutoencoderKL) — Variational AutoEncoder for Latent Audio Diffusion or None + + +unet (UNet2DConditionModel) — UNET model + + +mel (Mel) — transform audio <-> spectrogram + + +scheduler ([DDIMScheduler or DDPMScheduler]) — de-noising scheduler + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +audio_file: str = None +raw_audio: ndarray = None +slice: int = 0 +start_step: int = 0 +steps: int = None +generator: Generator = None +mask_start_secs: float = 0 +mask_end_secs: float = 0 +step_generator: Generator = None +eta: float = 0 +noise: Tensor = None +encoding: Tensor = None +return_dict = True + +) +→ +List[PIL Image] + +Parameters + +batch_size (int) — number of samples to generate + + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + +slice (int) — slice number of audio to convert + + +start_step (int) — step to start from + + +steps (int) — number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) + + +generator (torch.Generator) — random number generator or None + + +mask_start_secs (float) — number of seconds of audio to mask (not generate) at start + + +mask_end_secs (float) — number of seconds of audio to mask (not generate) at end + + +step_generator (torch.Generator) — random number generator used to de-noise or None + + +eta (float) — parameter between 0 and 1 used with DDIM scheduler + + +noise (torch.Tensor) — noise tensor of shape (batch_size, 1, height, width) or None + + +encoding (torch.Tensor) — for UNet2DConditionModel shape (batch_size, seq_length, cross_attention_dim) + + +return_dict (bool) — if True return AudioPipelineOutput, ImagePipelineOutput else Tuple + + +Returns + +List[PIL Image] + + + +mel spectrograms (float, List[np.ndarray]): sample rate and raw audios + + +Generate random mel spectrogram from audio input and convert to audio. + +encode + +< +source +> +( +images: typing.List[PIL.Image.Image] +steps: int = 50 + +) +→ +np.ndarray + +Parameters + +images (List[PIL Image]) — list of images to encode + + +steps (int) — number of encoding steps to perform (defaults to 50) + + +Returns + +np.ndarray + + + +noise tensor of shape (batch_size, 1, height, width) + + +Reverse step process: recover noisy image from generated image. 
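As a sketch of how encode() can be combined with the slerp() helper documented below, the following interpolates between two generated samples by blending their recovered noise and feeding the result back through the noise argument of the pipeline call. It assumes a DDIM-based checkpoint such as teticio/audio-diffusion-ddim-256 and that the recovered noise can be converted to a torch tensor as shown; treat it as an illustrative sketch rather than a tested recipe. Copied
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)

# Generate two samples, then run the reverse (encoding) process to recover their noise
output = pipe(batch_size=2)
noise = pipe.encode(output.images, steps=50)

# Spherically interpolate between the two noise tensors and regenerate from the blend
x0 = torch.as_tensor(noise[0:1]).to(device)
x1 = torch.as_tensor(noise[1:2]).to(device)
blended = pipe.slerp(x0, x1, 0.5)

interpolated = pipe(noise=blended)
display(interpolated.images[0])
display(Audio(interpolated.audios[0], rate=pipe.mel.get_sample_rate()))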
+ +get_default_steps + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of steps + + +Returns default number of steps recommended for inference + +get_input_dims + +< +source +> +( +) +→ +Tuple + +Returns + +Tuple + + + +(height, width) + + +Returns dimension of input image + +slerp + +< +source +> +( +x0: Tensor +x1: Tensor +alpha: float + +) +→ +torch.Tensor + +Parameters + +x0 (torch.Tensor) — first tensor to interpolate between + + +x1 (torch.Tensor) — seconds tensor to interpolate between + + +alpha (float) — interpolation between 0 and 1 + + +Returns + +torch.Tensor + + + +interpolated tensor + + +Spherical Linear intERPolation + +Mel + + +class diffusers.Mel + +< +source +> +( +x_res: int = 256 +y_res: int = 256 +sample_rate: int = 22050 +n_fft: int = 2048 +hop_length: int = 512 +top_db: int = 80 +n_iter: int = 32 + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + +sample_rate (int) — sample rate of audio + + +n_fft (int) — number of Fast Fourier Transforms + + +hop_length (int) — hop length (a higher number is recommended for lower than 256 y_res) + + +top_db (int) — loudest in decibels + + +n_iter (int) — number of iterations for Griffin Linn mel inversion + + + + +audio_slice_to_image + +< +source +> +( +slice: int + +) +→ +PIL Image + +Parameters + +slice (int) — slice number of audio to convert (out of get_number_of_slices()) + + +Returns + +PIL Image + + + +grayscale image of x_res x y_res + + +Convert slice of audio to spectrogram. + +get_audio_slice + +< +source +> +( +slice: int = 0 + +) +→ +np.ndarray + +Parameters + +slice (int) — slice number of audio (out of get_number_of_slices()) + + +Returns + +np.ndarray + + + +audio as numpy array + + +Get slice of audio. + +get_number_of_slices + +< +source +> +( +) +→ +int + +Returns + +int + + + +number of spectograms audio can be sliced into + + +Get number of slices in audio. + +get_sample_rate + +< +source +> +( +) +→ +int + +Returns + +int + + + +sample rate of audio + + +Get sample rate: + +image_to_audio + +< +source +> +( +image: Image + +) +→ +audio (np.ndarray) + +Parameters + +image (PIL Image) — x_res x y_res grayscale image + + +Returns + +audio (np.ndarray) + + + +raw audio + + +Converts spectrogram to audio. + +load_audio + +< +source +> +( +audio_file: str = None +raw_audio: ndarray = None + +) + + +Parameters + +audio_file (str) — must be a file on disk due to Librosa limitation or + + +raw_audio (np.ndarray) — audio as numpy array + + + +Load audio. + +set_resolution + +< +source +> +( +x_res: int +y_res: int + +) + + +Parameters + +x_res (int) — x resolution of spectrogram (time) + + +y_res (int) — y resolution of spectrogram (frequency bins) + + + +Set resolution. diff --git a/scrapped_outputs/e326c99865512b097eb0b0fbda797a41.txt b/scrapped_outputs/e326c99865512b097eb0b0fbda797a41.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/e326c99865512b097eb0b0fbda797a41.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. 
Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. 
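If you are unsure which variants a particular repository actually ships, you can list its files with huggingface_hub before downloading anything. A minimal sketch using the official Apple checkpoint mentioned above: Copied
from huggingface_hub import list_repo_files

files = list_repo_files("apple/coreml-stable-diffusion-v1-4")

# Collect the "<attention>/<framework>" folder combinations, e.g. "original/packages"
variants = sorted({"/".join(f.split("/")[:2]) for f in files if "/" in f})
print(variants)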
Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. 
We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/e3298b8e3407fe4458945c855e41b98d.txt b/scrapped_outputs/e3298b8e3407fe4458945c855e41b98d.txt new file mode 100644 index 0000000000000000000000000000000000000000..c45daf9a97ec4b41db61304ab7ca97f58be2ed61 --- /dev/null +++ b/scrapped_outputs/e3298b8e3407fe4458945c855e41b98d.txt @@ -0,0 +1 @@ +xFormers We recommend xFormers for both inference and training. In our tests, the optimizations performed in the attention blocks allow for both faster speed and reduced memory consumption. 
Install xFormers from pip: Copied pip install xformers The xFormers pip package requires the latest version of PyTorch. If you need to use a previous version of PyTorch, then we recommend installing xFormers from source. After xFormers is installed, you can use enable_xformers_memory_efficient_attention() for faster inference and reduced memory consumption as shown in this section. According to this issue, xFormers v0.0.16 cannot be used for training (fine-tune or DreamBooth) on some GPUs. If you observe this problem, please install a development version as indicated in the issue comments. diff --git a/scrapped_outputs/e34f4c4c196d52e940316c517060155e.txt b/scrapped_outputs/e34f4c4c196d52e940316c517060155e.txt new file mode 100644 index 0000000000000000000000000000000000000000..fbba08e6089c48721c4daf719b002f35502d6466 --- /dev/null +++ b/scrapped_outputs/e34f4c4c196d52e940316c517060155e.txt @@ -0,0 +1,573 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) was proposed in SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. The abstract from the paper is: We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention blocks and a larger cross-attention context as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model which is used to improve the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. Tips Using SDXL with a DPM++ scheduler for less than 50 steps is known to produce visual artifacts because the solver becomes numerically unstable. To fix this issue, take a look at this PR which recommends, for ODE/SDE solvers: set use_karras_sigmas=True or lu_lambdas=True to improve image quality; set euler_at_final=True if you’re using a solver with uniform step sizes (DPM++2M or DPM++2M SDE). Most SDXL checkpoints work best with an image size of 1024x1024. Image sizes of 768x768 and 512x512 are also supported, but the results aren’t as good. Anything below 512x512 is not recommended and likely won’t be for default checkpoints like stabilityai/stable-diffusion-xl-base-1.0. SDXL can pass a different prompt for each of the text encoders it was trained on. You can even pass different parts of the same prompt to the text encoders. SDXL output images can be improved by making use of a refiner model in an image-to-image setting. SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping parameters. To learn how to use SDXL for various tasks, how to optimize performance, and other usage examples, take a look at the Stable Diffusion XL guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! 
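To illustrate the dual text-encoder and negative micro-conditioning tips above, here is a minimal sketch with the base checkpoint; every argument used corresponds to a parameter documented in the __call__ signature below, and the prompts are only examples: Copied
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# prompt is consumed by the first text encoder (CLIP ViT-L),
# prompt_2 by the second text encoder (OpenCLIP ViT-bigG)
prompt = "a photo of an astronaut riding a horse on mars"
prompt_2 = "cinematic lighting, highly detailed, 35mm film grain"

image = pipe(
    prompt=prompt,
    prompt_2=prompt_2,
    # negatively condition the generation on low-resolution originals
    negative_original_size=(512, 512),
    negative_target_size=(1024, 1024),
).images[0]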
StableDiffusionXLPipeline class diffusers.StableDiffusionXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. 
+ Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLPipeline + +>>> pipe = StableDiffusionXLPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). 
negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLImg2ImgPipeline class diffusers.StableDiffusionXLImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. 
Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires an aesthetic_score condition to be passed during inference. Also see the +config of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None strength: float = 0.3 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. Note that in the case of +denoising_start being declared as an integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refine Image +Quality. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refine Image +Quality. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. 
If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +`tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLImg2ImgPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") +>>> url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png" + +>>> init_image = load_image(url).convert("RGB") +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, image=init_image).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. 
If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionXLInpaintPipeline class diffusers.StableDiffusionXLInpaintPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None requires_aesthetics_score: bool = False force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. 
text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for text-to-image generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None mask_image: Union = None masked_image_latents: FloatTensor = None height: Optional = None width: Optional = None padding_mask_crop: Optional = None strength: float = 0.9999 num_inference_steps: int = 50 timesteps: List = None denoising_start: Optional = None denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None aesthetic_score: float = 6.0 negative_aesthetic_score: float = 2.5 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. This is set to 1024 by default for the best results. +Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. padding_mask_crop (int, optional, defaults to None) — +The size of the margin in the crop applied to the image and mask. If None, no crop is applied to image and mask_image. If +padding_mask_crop is not None, it will first find a rectangular region with the same aspect ratio as the image that +contains the entire masked area, and then expand that area based on padding_mask_crop. The image and mask_image will then be cropped based on +the expanded area before resizing to the original image size for inpainting. This is useful when the masked area is small while the image is large +and contains information irrelevant for inpainting, such as the background. strength (float, optional, defaults to 0.9999) — +Conceptually, indicates how much to transform the masked portion of the reference image. Must be +between 0 and 1. image will be used as a starting point, adding more noise to it the larger the +strength. The number of denoising steps depends on the amount of noise initially added. When +strength is 1, added noise will be maximum and the denoising process will run for the full number of +iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores the masked +portion of the reference image. Note that in the case of denoising_start being declared as an +integer, the value of strength will be ignored. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_start (float, optional) — +When specified, indicates the fraction (between 0.0 and 1.0) of the total denoising process to be +bypassed before it is initiated. Consequently, the initial part of the denoising process is skipped and +it is assumed that the passed image is a partly denoised image. Note that when this is specified, +strength will be ignored. 
The denoising_start parameter is particularly beneficial when this pipeline +is integrated into a “Mixture of Denoisers” multi-pipeline setup, as detailed in Refining the Image +Output. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise (ca. final 20% of timesteps still needed) and should be +denoised by a successor pipeline that has denoising_start set to 0.8 so that it only denoises the +final 20% of the scheduler. The denoising_end parameter should ideally be utilized when this pipeline +forms a part of a “Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on a specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be as same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInpaintPipeline +>>> from diffusers.utils import load_image + +>>> pipe = StableDiffusionXLInpaintPipeline.from_pretrained( +... "stabilityai/stable-diffusion-xl-base-1.0", +... torch_dtype=torch.float16, +... variant="fp16", +... use_safetensors=True, +... ) +>>> pipe.to("cuda") + +>>> img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +>>> mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +>>> init_image = load_image(img_url).convert("RGB") +>>> mask_image = load_image(mask_url).convert("RGB") + +>>> prompt = "A majestic tiger sitting on a bench" +>>> image = pipe( +... prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80 +... ).images[0] disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. 
This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. 
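A minimal usage sketch of this experimental pair of methods, assuming the same SDXL checkpoint as the inpainting example above (the prompt and settings are only placeholders):

import torch
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Fuse the QKV projections in both the UNet and the VAE before running inference
pipe.fuse_qkv_projections(unet=True, vae=True)

# ... run the pipeline as usual ...

# Revert to the unfused projections when they are no longer wanted
pipe.unfuse_qkv_projections()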
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. diff --git a/scrapped_outputs/e396d63d782101338eff74965c247feb.txt b/scrapped_outputs/e396d63d782101338eff74965c247feb.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e5f20bcd4c8ced4f5d66653f375f4b97a022c2a --- /dev/null +++ b/scrapped_outputs/e396d63d782101338eff74965c247feb.txt @@ -0,0 +1,13 @@ +Improve image quality with deterministic generation A common way to improve the quality of generated images is with deterministic batch generation, generate a batch of images and select one image to improve with a more detailed prompt in a second round of inference. The key is to pass a list of torch.Generator’s to the pipeline for batched image generation, and tie each Generator to a seed so you can reuse it for an image. Let’s use runwayml/stable-diffusion-v1-5 for example, and generate several versions of the following prompt: Copied prompt = "Labrador in the style of Vermeer" Instantiate a pipeline with DiffusionPipeline.from_pretrained() and place it on a GPU (if available): Copied import torch +from diffusers import DiffusionPipeline +from diffusers.utils import make_image_grid + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +pipe = pipe.to("cuda") Now, define four different Generators and assign each Generator a seed (0 to 3) so you can reuse a Generator later for a specific image: Copied generator = [torch.Generator(device="cuda").manual_seed(i) for i in range(4)] To create a batched seed, you should use a list comprehension that iterates over the length specified in range(). This creates a unique Generator object for each image in the batch. If you only multiply the Generator by the batch size, this only creates one Generator object that is used sequentially for each image in the batch. For example, if you want to use the same seed to create 4 identical images: Copied ❌ [torch.Generator().manual_seed(seed)] * 4 + +✅ [torch.Generator().manual_seed(seed) for _ in range(4)] Generate the images and have a look: Copied images = pipe(prompt, generator=generator, num_images_per_prompt=4).images +make_image_grid(images, rows=2, cols=2) In this example, you’ll improve upon the first image - but in reality, you can use any image you want (even the image with double sets of eyes!). The first image used the Generator with seed 0, so you’ll reuse that Generator for the second round of inference. 
To improve the quality of the image, add some additional text to the prompt: Copied prompt = [prompt + t for t in [", highly realistic", ", artsy", ", trending", ", colorful"]] +generator = [torch.Generator(device="cuda").manual_seed(0) for i in range(4)] Create four generators with seed 0, and generate another batch of images, all of which should look like the first image from the previous round! Copied images = pipe(prompt, generator=generator).images +make_image_grid(images, rows=2, cols=2) diff --git a/scrapped_outputs/e3a4a0a4a2aa3da317f4fcf3ec9a9f2c.txt b/scrapped_outputs/e3a4a0a4a2aa3da317f4fcf3ec9a9f2c.txt new file mode 100644 index 0000000000000000000000000000000000000000..90f987bd68cea6f4c0f29a9a85768db8b9798fed --- /dev/null +++ b/scrapped_outputs/e3a4a0a4a2aa3da317f4fcf3ec9a9f2c.txt @@ -0,0 +1 @@ +Overview A pipeline is an end-to-end class that provides a quick and easy way to use a diffusion system for inference by bundling independently trained models and schedulers together. Certain combinations of models and schedulers define specific pipeline types, like StableDiffusionXLPipeline or StableDiffusionControlNetPipeline, with specific capabilities. All pipeline types inherit from the base DiffusionPipeline class; pass it any checkpoint, and it’ll automatically detect the pipeline type and load the necessary components. This section demonstrates how to use specific pipelines such as Stable Diffusion XL, ControlNet, and DiffEdit. You’ll also learn how to use a distilled version of the Stable Diffusion model to speed up inference, how to create reproducible pipelines, and how to use and contribute community pipelines. diff --git a/scrapped_outputs/e3aa7b7192f99829155a28486ce9de37.txt b/scrapped_outputs/e3aa7b7192f99829155a28486ce9de37.txt new file mode 100644 index 0000000000000000000000000000000000000000..98269f3c31d991ee698908d92c0548b99079f45a --- /dev/null +++ b/scrapped_outputs/e3aa7b7192f99829155a28486ce9de37.txt @@ -0,0 +1,24 @@ +IPNDMScheduler IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch. IPNDMScheduler class diffusers.IPNDMScheduler < source > ( num_train_timesteps: int = 1000 trained_betas: Union = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. A fourth-order Improved Pseudo Linear Multistep scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). 
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/e3abab1ac6c63cb78212af2983e68bd2.txt b/scrapped_outputs/e3abab1ac6c63cb78212af2983e68bd2.txt new file mode 100644 index 0000000000000000000000000000000000000000..350fcde2194ed65053fe8201403456d5175dba21 --- /dev/null +++ b/scrapped_outputs/e3abab1ac6c63cb78212af2983e68bd2.txt @@ -0,0 +1,98 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. 
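A minimal sketch of how these tips translate to code, assuming an already-available Stable Diffusion checkpoint (the model name below is only illustrative): swap the scheduler into an existing pipeline with from_config and enable the recommended settings.

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Second-order SDE variant of DPM-Solver++, as recommended above for guided sampling
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++", solver_order=2
)

# DPMSolver can generate good samples in roughly 20 steps
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]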
DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/e3b776abc41a4f138c1edda2f3489292.txt b/scrapped_outputs/e3b776abc41a4f138c1edda2f3489292.txt new file mode 100644 index 0000000000000000000000000000000000000000..e0e7bf9dc6b644c6e5f567e12d4d03cbf4f0b036 --- /dev/null +++ b/scrapped_outputs/e3b776abc41a4f138c1edda2f3489292.txt @@ -0,0 +1,50 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. 
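A short, hedged sketch of swapping this scheduler into an existing pipeline (the checkpoint below is only an example):

import torch
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config so the noise schedule settings carry over
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Euler often produces good results in 20-30 steps
image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]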
EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. 
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/e3f6166e06bb066fc6d3f54c92e03dcd.txt b/scrapped_outputs/e3f6166e06bb066fc6d3f54c92e03dcd.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/e3f6166e06bb066fc6d3f54c92e03dcd.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
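As a brief sketch of the idea described above (the strength value is only illustrative): when no depth_map is supplied, the pipeline estimates one from the input image with its MiDaS depth estimator, and strength controls how far the result departs from the original layout.

import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

# No depth_map is passed, so the pipeline predicts one internally;
# a lower strength keeps more of the original image structure
image = pipe(prompt="two tigers", image=init_image, strength=0.7).images[0]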
StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. 
Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... 
) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_prompt = "bad, deformed, ugly, bad anatomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
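The example above lets the pipeline estimate depth on its own; the depth_map argument documented earlier can also be supplied explicitly to skip the built-in estimator. The following is a minimal, non-authoritative sketch that assumes the pipeline accepts a raw (batch, height, width) depth tensor, such as the predicted_depth output of a DPT model from 🤗 Transformers, and normalizes it internally; check the pipeline source for the exact expected format. Copied import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image
from transformers import DPTForDepthEstimation, DPTImageProcessor

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

# Estimate depth with a DPT model instead of relying on the pipeline's
# built-in depth estimator (the model choice here is illustrative).
processor = DPTImageProcessor.from_pretrained("Intel/dpt-hybrid-midas")
depth_model = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas")

inputs = processor(images=init_image, return_tensors="pt")
with torch.no_grad():
    depth_map = depth_model(**inputs).predicted_depth  # assumed shape: (1, H, W)

image = pipe(
    prompt="two tigers",
    image=init_image,
    depth_map=depth_map,  # bypasses the pipeline's internal depth estimator
    strength=0.7,
).images[0]
image.save("two_tigers_depth.png")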
load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). 
Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. 
Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e415c7ab881029b729974afe6d358f07.txt b/scrapped_outputs/e415c7ab881029b729974afe6d358f07.txt new file mode 100644 index 0000000000000000000000000000000000000000..987714f2dceff74a56e14f7f4e6b5e17d1f50da7 --- /dev/null +++ b/scrapped_outputs/e415c7ab881029b729974afe6d358f07.txt @@ -0,0 +1,48 @@ +How to use Stable Diffusion in Apple Silicon (M1/M2) + +🤗 Diffusers is compatible with Apple silicon for Stable Diffusion inference, using the PyTorch mps device. These are the steps you need to follow to use your M1 or M2 computer with Stable Diffusion. + +Requirements + +Mac computer with Apple silicon (M1/M2) hardware. +macOS 12.6 or later (13.0 or later recommended). +arm64 version of Python. +PyTorch 1.13. You can install it with pip or conda using the instructions in https://pytorch.org/get-started/locally/. + +Inference Pipeline + +The snippet below demonstrates how to use the mps backend using the familiar to() interface to move the Stable Diffusion pipeline to your M1 or M2 device. +We recommend to “prime” the pipeline using an additional one-time pass through it. 
This is a temporary workaround for a weird issue we have detected: the first inference pass produces slightly different results than subsequent ones. You only need to do this pass once, and it's ok to use just one inference step and discard the result. + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("mps") + +# Recommended if your computer has < 64 GB of RAM +pipe.enable_attention_slicing() + +prompt = "a photo of an astronaut riding a horse on mars" + +# First-time "warmup" pass (see explanation above) +_ = pipe(prompt, num_inference_steps=1) + +# Results match those from the CPU device after the warmup pass. +image = pipe(prompt).images[0] + +Performance Recommendations + +M1/M2 performance is very sensitive to memory pressure. The system will automatically swap if it needs to, but performance will degrade significantly when it does. +We recommend you use attention slicing to reduce memory pressure during inference and prevent swapping, particularly if your computer has less than 64 GB of system RAM, or if you generate images at non-standard resolutions larger than 512 × 512 pixels. Attention slicing performs the costly attention operation in multiple steps instead of all at once. It usually has a performance impact of ~20% in computers without universal memory, but we have observed better performance in most Apple Silicon computers, unless you have 64 GB or more. + + + Copied +pipe.enable_attention_slicing() + +Known Issues + +As mentioned above, we are investigating a strange first-time inference issue. +Generating multiple prompts in a batch crashes or doesn't work reliably. We believe this is related to the mps backend in PyTorch. This is being resolved, but for now we recommend iterating instead of batching. diff --git a/scrapped_outputs/e422485cf20fbe9b40b75f3e4845bb80.txt b/scrapped_outputs/e422485cf20fbe9b40b75f3e4845bb80.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/e422485cf20fbe9b40b75f3e4845bb80.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. 
We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. 
encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/e42b41f0c678bdc818d9a83ce3045aa8.txt b/scrapped_outputs/e42b41f0c678bdc818d9a83ce3045aa8.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c75745a0e7cec39d676aa7cacbf09ed6d05e3a4 --- /dev/null +++ b/scrapped_outputs/e42b41f0c678bdc818d9a83ce3045aa8.txt @@ -0,0 +1,361 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. 
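Before loading the example images, note that a mask is just a grayscale image in which white marks the area to repaint and black marks the area to keep, so you can also build one programmatically instead of downloading the ready-made mask used below or drawing one in the Space mentioned later in this guide. A minimal, illustrative PIL sketch (the image size and rectangle coordinates are placeholders to adapt to your own base image): Copied from PIL import Image, ImageDraw

# Start from an all-black mask (keep everything), sized like the base image.
mask = Image.new("L", (512, 512), 0)

# Paint the region to inpaint in white; these coordinates are arbitrary placeholders.
draw = ImageDraw.Draw(mask)
draw.rectangle((150, 100, 400, 350), fill=255)

mask.save("my_mask.png")  # pass this file as mask_image to the pipeline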
Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images for inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure its parameters. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
+ + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. 
+ + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + + Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) + + + runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the code below to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +# Convert mask to grayscale NumPy array +mask_image_arr = np.array(mask_image.convert("L")) +# Add a channel dimension to the end of the grayscale mask +mask_image_arr = mask_image_arr[:, :, None] +# Binarize the mask: 1s correspond to the pixels which are repainted +mask_image_arr = mask_image_arr.astype(np.float32) / 255.0 +mask_image_arr[mask_image_arr < 0.5] = 0 +mask_image_arr[mask_image_arr >= 0.5] = 1 + +# Take the masked pixels from the repainted image and the unmasked pixels from the initial image +unmasked_unchanged_image_arr = (1 - mask_image_arr) * init_image + mask_image_arr * repainted_image +unmasked_unchanged_image = PIL.Image.fromarray(unmasked_unchanged_image_arr.round().astype("uint8")) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 
📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled-dot product attention is automatically enabled and you don’t need to do anything else. 
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/e42ef161a64ff21c9f8ee083f97cb1d1.txt b/scrapped_outputs/e42ef161a64ff21c9f8ee083f97cb1d1.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/e42ef161a64ff21c9f8ee083f97cb1d1.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. 
A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... ) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/e43efeb74378942bf054e6b93ea0996d.txt b/scrapped_outputs/e43efeb74378942bf054e6b93ea0996d.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/e43efeb74378942bf054e6b93ea0996d.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. 
in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer. For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/e443c41768dfeb6b096aa996e22f8807.txt b/scrapped_outputs/e443c41768dfeb6b096aa996e22f8807.txt new file mode 100644 index 0000000000000000000000000000000000000000..78a3b346e030b1216370c892ffe83052959511a8 --- /dev/null +++ b/scrapped_outputs/e443c41768dfeb6b096aa996e22f8807.txt @@ -0,0 +1,105 @@ +Attend-and-Excite Attend-and-Excite for Stable Diffusion was proposed in Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models and provides textual attention control over image generation. The abstract from the paper is: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen - or excite - their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. You can find additional information about Attend-and-Excite on the project page, the original codebase, or try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionAttendAndExcitePipeline class diffusers.StableDiffusionAttendAndExcitePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion and Attend-and-Excite. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings __call__ < source > ( prompt: Union token_indices: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None max_iter_to_alter: int = 25 thresholds: dict = {0: 0.05, 10: 0.5, 20: 0.8} scale_factor: int = 20 attn_res: Optional = (16, 16) clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. token_indices (List[int]) — +The token indices to alter with attend-and-excite. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. max_iter_to_alter (int, optional, defaults to 25) — +Number of denoising steps to apply attend-and-excite. The max_iter_to_alter denoising steps are when +attend-and-excite is applied. For example, if max_iter_to_alter is 25 and there are a total of 30 +denoising steps, the first 25 denoising steps applies attend-and-excite and the last 5 will not. thresholds (dict, optional, defaults to {0 -- 0.05, 10: 0.5, 20: 0.8}): +Dictionary defining the iterations and desired thresholds to apply iterative latent refinement in. scale_factor (int, optional, default to 20) — +Scale factor to control the step size of each attend-and-excite update. attn_res (tuple, optional, default computed from width and height) — +The 2D resolution of the semantic attention map. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionAttendAndExcitePipeline + +>>> pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ).to("cuda") + + +>>> prompt = "a cat and a frog" + +>>> # use get_indices function to find out indices of the tokens you want to alter +>>> pipe.get_indices(prompt) +{0: '<|startoftext|>', 1: 'a', 2: 'cat', 3: 'and', 4: 'a', 5: 'frog', 6: '<|endoftext|>'} + +>>> token_indices = [2, 5] +>>> seed = 6141 +>>> generator = torch.Generator("cuda").manual_seed(seed) + +>>> images = pipe( +... prompt=prompt, +... token_indices=token_indices, +... guidance_scale=7.5, +... generator=generator, +... num_inference_steps=50, +... max_iter_to_alter=25, +... 
).images + +>>> image = images[0] +>>> image.save(f"../images/{prompt}_{seed}.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_indices < source > ( prompt: str ) Utility function to list the indices of the tokens you wish to alte StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e49a8552189a1190e3f9cd9843b86426.txt b/scrapped_outputs/e49a8552189a1190e3f9cd9843b86426.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6f952ad08987328ef5a7108f6c98636c5902202 --- /dev/null +++ b/scrapped_outputs/e49a8552189a1190e3f9cd9843b86426.txt @@ -0,0 +1,76 @@ +Contribute a community pipeline 💡 Take a look at GitHub Issue #841 for more context about why we’re adding community pipelines to help everyone easily share their work without being slowed down. Community pipelines allow you to add any additional features you’d like on top of the DiffusionPipeline. The main benefit of building on top of the DiffusionPipeline is anyone can load and use your pipeline by only adding one more argument, making it super easy for the community to access. This guide will show you how to create a community pipeline and explain how they work. To keep things simple, you’ll create a “one-step” pipeline where the UNet does a single forward pass and calls the scheduler once. 
Initialize the pipeline You should start by creating a one_step_unet.py file for your community pipeline. In this file, create a pipeline class that inherits from the DiffusionPipeline to be able to load model weights and the scheduler configuration from the Hub. The one-step pipeline needs a UNet and a scheduler, so you’ll need to add these as arguments to the __init__ function: Copied from diffusers import DiffusionPipeline +import torch + +class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() To ensure your pipeline and its components (unet and scheduler) can be saved with save_pretrained(), add them to the register_modules function: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + ++ self.register_modules(unet=unet, scheduler=scheduler) Cool, the __init__ step is done and you can move to the forward pass now! 🔥 Define the forward pass In the forward pass, which we recommend defining as __call__, you have complete creative freedom to add whatever feature you’d like. For our amazing one-step pipeline, create a random image and only call the unet and scheduler once by setting timestep=1: Copied from diffusers import DiffusionPipeline + import torch + + class UnetSchedulerOneForwardPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + ++ def __call__(self): ++ image = torch.randn( ++ (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size), ++ ) ++ timestep = 1 + ++ model_output = self.unet(image, timestep).sample ++ scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample + ++ return scheduler_output That’s it! 🚀 You can now run this pipeline by passing a unet and scheduler to it: Copied from diffusers import DDPMScheduler, UNet2DModel + +scheduler = DDPMScheduler() +unet = UNet2DModel() + +pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler) + +output = pipeline() But what’s even better is you can load pre-existing weights into the pipeline if the pipeline structure is identical. For example, you can load the google/ddpm-cifar10-32 weights into the one-step pipeline: Copied pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True) + +output = pipeline() Share your pipeline Open a Pull Request on the 🧨 Diffusers repository to add your awesome pipeline in one_step_unet.py to the examples/community subfolder. Once it is merged, anyone with diffusers >= 0.4.0 installed can use this pipeline magically 🪄 by specifying it in the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="one_step_unet", use_safetensors=True +) +pipe() Another way to share your community pipeline is to upload the one_step_unet.py file directly to your preferred model repository on the Hub. 
Instead of specifying the one_step_unet.py file, pass the model repository id to the custom_pipeline argument: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="stevhliu/one_step_unet", use_safetensors=True +) Take a look at the following table to compare the two sharing workflows to help you decide the best option for you: GitHub community pipeline HF Hub community pipeline usage same same review process open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower upload directly to a Hub repository without any review; this is the fastest workflow visibility included in the official Diffusers repository and documentation included on your HF Hub profile and relies on your own usage/promotion to gain visibility 💡 You can use whatever package you want in your community pipeline file - as long as the user has it installed, everything will work fine. Make sure you have one and only one pipeline class that inherits from DiffusionPipeline because this is automatically detected. How do community pipelines work? A community pipeline is a class that inherits from DiffusionPipeline which means: It can be loaded with the custom_pipeline argument. The model weights and scheduler configuration are loaded from pretrained_model_name_or_path. The code that implements a feature in the community pipeline is defined in a pipeline.py file. Sometimes you can’t load all the pipeline components weights from an official repository. In this case, the other components should be passed directly to the pipeline: Copied from diffusers import DiffusionPipeline +from transformers import CLIPImageProcessor, CLIPModel + +model_id = "CompVis/stable-diffusion-v1-4" +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16) + +pipeline = DiffusionPipeline.from_pretrained( + model_id, + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + scheduler=scheduler, + torch_dtype=torch.float16, + use_safetensors=True, +) The magic behind community pipelines is contained in the following code. It allows the community pipeline to be loaded from GitHub or the Hub, and it’ll be available to all 🧨 Diffusers packages. Copied # 2. Load the pipeline class, if using custom module then load it from the Hub +# if we load from explicit class, let's use it +if custom_pipeline is not None: + pipeline_class = get_class_from_dynamic_module( + custom_pipeline, module_file=CUSTOM_PIPELINE_FILE_NAME, cache_dir=custom_pipeline + ) +elif cls != DiffusionPipeline: + pipeline_class = cls +else: + diffusers_module = importlib.import_module(cls.__module__.split(".")[0]) + pipeline_class = getattr(diffusers_module, config_dict["_class_name"]) diff --git a/scrapped_outputs/e4a8b8b0aada48707dd4aa6f901a991c.txt b/scrapped_outputs/e4a8b8b0aada48707dd4aa6f901a991c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/e4a8b8b0aada48707dd4aa6f901a991c.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. 
The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe(eta=0.0, num_inference_steps=50) + +>>> # process image to PIL +>>> image_processed = image.cpu().permute(0, 2, 3, 1) +>>> image_processed = (image_processed + 1.0) * 127.5 +>>> image_processed = image_processed.numpy().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/e4aaac6ec9fbbf97a352a07dd88cd9e0.txt b/scrapped_outputs/e4aaac6ec9fbbf97a352a07dd88cd9e0.txt new file mode 100644 index 0000000000000000000000000000000000000000..7ba14b6e0e43d4ca7ed6b0c338388308b99ebb1d --- /dev/null +++ b/scrapped_outputs/e4aaac6ec9fbbf97a352a07dd88cd9e0.txt @@ -0,0 +1,265 @@ +ControlNet ControlNet is a type of model for controlling image diffusion models by conditioning the model with an additional input image. There are many types of conditioning inputs (canny edge, user sketching, human pose, depth, and more) you can use to control a diffusion model. This is hugely useful because it affords you greater control over image generation, making it easier to generate specific images without experimenting with different text prompts or denoising values as much. Check out Section 3.5 of the ControlNet paper v1 for a list of ControlNet implementations on various conditioning inputs. You can find the official Stable Diffusion ControlNet conditioned models on lllyasviel’s Hub profile, and more community-trained ones on the Hub. For Stable Diffusion XL (SDXL) ControlNet models, you can find them on the 🤗 Diffusers Hub organization, or you can browse community-trained ones on the Hub. A ControlNet model has two sets of weights (or blocks) connected by a zero-convolution layer: a locked copy keeps everything a large pretrained diffusion model has learned a trainable copy is trained on the additional conditioning input Since the locked copy preserves the pretrained model, training and implementing a ControlNet on a new conditioning input is as fast as finetuning any other model because you aren’t training the model from scratch. This guide will show you how to use ControlNet for text-to-image, image-to-image, inpainting, and more! There are many types of ControlNet conditioning inputs to choose from, but in this guide we’ll only focus on several of them. Feel free to experiment with other conditioning inputs! Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate opencv-python Text-to-image For text-to-image, you normally pass a text prompt to the model. But with ControlNet, you can specify an additional conditioning input. Let’s condition the model with a canny image, a white outline of an image on a black background. This way, the ControlNet can use the canny image as a control to guide the model to generate an image with the same outline. 
Load an image and use the opencv-python library to extract the canny image: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np + +original_image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) original image canny image Next, load a ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt and canny image to the pipeline: Copied output = pipe( + "the mona lisa", image=canny_image +).images[0] +make_image_grid([original_image, canny_image, output], rows=1, cols=3) Image-to-image For image-to-image, you’d typically pass an initial image and a prompt to the pipeline to generate a new image. With ControlNet, you can pass an additional conditioning input to guide the model. Let’s condition the model with a depth map, an image which contains spatial information. This way, the ControlNet can use the depth map as a control to guide the model to generate an image that preserves spatial information. You’ll use the StableDiffusionControlNetImg2ImgPipeline for this task, which is different from the StableDiffusionControlNetPipeline because it allows you to pass an initial image as the starting point for the image generation process. Load an image and use the depth-estimation Pipeline from 🤗 Transformers to extract the depth map of an image: Copied import torch +import numpy as np + +from transformers import pipeline +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-img2img.jpg" +) + +def get_depth_map(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + depth_map = detected_map.permute(2, 0, 1) + return depth_map + +depth_estimator = pipeline("depth-estimation") +depth_map = get_depth_map(image, depth_estimator).unsqueeze(0).half().to("cuda") Next, load a ControlNet model conditioned on depth maps and pass it to the StableDiffusionControlNetImg2ImgPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetImg2ImgPipeline, ControlNetModel, UniPCMultistepScheduler +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, and depth map to the pipeline: Copied output = pipe( + "lego batman and robin", image=image, control_image=depth_map, +).images[0] +make_image_grid([image, output], rows=1, cols=2) original image generated image Inpainting For inpainting, you need an initial image, a mask image, and a prompt describing what to replace the mask with. ControlNet models allow you to add another control image to condition a model with. Let’s condition the model with an inpainting mask. This way, the ControlNet can use the inpainting mask as a control to guide the model to generate an image within the mask area. Load an initial image and a mask image: Copied from diffusers.utils import load_image, make_image_grid + +init_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint.jpg" +) +init_image = init_image.resize((512, 512)) + +mask_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet-inpaint-mask.jpg" +) +mask_image = mask_image.resize((512, 512)) +make_image_grid([init_image, mask_image], rows=1, cols=2) Create a function to prepare the control image from the initial and mask images. This’ll create a tensor to mark the pixels in init_image as masked if the corresponding pixel in mask_image is over a certain threshold. Copied import numpy as np +import torch + +def make_inpaint_condition(image, image_mask): + image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 + image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0 + + assert image.shape[0:1] == image_mask.shape[0:1] + image[image_mask > 0.5] = -1.0 # set as masked pixel + image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) + image = torch.from_numpy(image) + return image + +control_image = make_inpaint_condition(init_image, mask_image) original image mask image Load a ControlNet model conditioned on inpainting and pass it to the StableDiffusionControlNetInpaintPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to speed up inference and reduce memory usage. 
Copied from diffusers import StableDiffusionControlNetInpaintPipeline, ControlNetModel, UniPCMultistepScheduler + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now pass your prompt, initial image, mask image, and control image to the pipeline: Copied output = pipe( + "corgi face with large ears, detailed, pixar, animated, disney", + num_inference_steps=20, + eta=1.0, + image=init_image, + mask_image=mask_image, + control_image=control_image, +).images[0] +make_image_grid([init_image, mask_image, output], rows=1, cols=3) Guess mode Guess mode does not require supplying a prompt to a ControlNet at all! This forces the ControlNet encoder to do it’s best to “guess” the contents of the input control map (depth map, pose estimation, canny edge, etc.). Guess mode adjusts the scale of the output residuals from a ControlNet by a fixed ratio depending on the block depth. The shallowest DownBlock corresponds to 0.1, and as the blocks get deeper, the scale increases exponentially such that the scale of the MidBlock output becomes 1.0. Guess mode does not have any impact on prompt conditioning and you can still provide a prompt if you want. Set guess_mode=True in the pipeline, and it is recommended to set the guidance_scale value between 3.0 and 5.0. Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +from PIL import Image +import cv2 + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", controlnet=controlnet, use_safetensors=True).to("cuda") + +original_image = load_image("https://huggingface.co/takuma104/controlnet_dev/resolve/main/bird_512x512.png") + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) regular mode with prompt guess mode without prompt ControlNet with Stable Diffusion XL There aren’t too many ControlNet models compatible with Stable Diffusion XL (SDXL) at the moment, but we’ve trained two full-sized ControlNet models for SDXL conditioned on canny edge detection and depth maps. We’re also experimenting with creating smaller versions of these SDXL-compatible ControlNet models so it is easier to run on resource-constrained hardware. You can find these checkpoints on the 🤗 Diffusers Hub organization! Let’s use a SDXL ControlNet conditioned on canny images to generate an image. 
Start by loading an image and prepare the canny image: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +from PIL import Image +import cv2 +import numpy as np +import torch + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image Load a SDXL ControlNet model conditioned on canny edge detection and pass it to the StableDiffusionXLControlNetPipeline. You can also enable model offloading to reduce memory usage. Copied controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", + torch_dtype=torch.float16, + use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + controlnet=controlnet, + vae=vae, + torch_dtype=torch.float16, + use_safetensors=True +) +pipe.enable_model_cpu_offload() Now pass your prompt (and optionally a negative prompt if you’re using one) and canny image to the pipeline: The controlnet_conditioning_scale parameter determines how much weight to assign to the conditioning inputs. A value of 0.5 is recommended for good generalization, but feel free to experiment with this number! Copied prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = 'low quality, bad quality, sketches' + +image = pipe( + prompt, + negative_prompt=negative_prompt, + image=canny_image, + controlnet_conditioning_scale=0.5, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) You can use StableDiffusionXLControlNetPipeline in guess mode as well by setting the parameter to True: Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL +from diffusers.utils import load_image, make_image_grid +import numpy as np +import torch +import cv2 +from PIL import Image + +prompt = "aerial view, a futuristic research complex in a bright foggy jungle, hard lighting" +negative_prompt = "low quality, bad quality, sketches" + +original_image = load_image( + "https://hf.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd_controlnet/hf-logo.png" +) + +controlnet = ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True +) +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.enable_model_cpu_offload() + +image = np.array(original_image) +image = cv2.Canny(image, 100, 200) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +image = pipe( + prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=0.5, image=canny_image, 
guess_mode=True, +).images[0] +make_image_grid([original_image, canny_image, image], rows=1, cols=3) MultiControlNet Replace the SDXL model with a model like runwayml/stable-diffusion-v1-5 to use multiple conditioning inputs with Stable Diffusion models. You can compose multiple ControlNet conditionings from different image inputs to create a MultiControlNet. To get better results, it is often helpful to: mask conditionings such that they don’t overlap (for example, mask the area of a canny image where the pose conditioning is located) experiment with the controlnet_conditioning_scale parameter to determine how much weight to assign to each conditioning input In this example, you’ll combine a canny image and a human pose estimation image to generate a new image. Prepare the canny image conditioning: Copied from diffusers.utils import load_image, make_image_grid +from PIL import Image +import numpy as np +import cv2 + +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +) +image = np.array(original_image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) + +# zero out middle columns of image where pose will be overlaid +zero_start = image.shape[1] // 4 +zero_end = zero_start + image.shape[1] // 2 +image[:, zero_start:zero_end] = 0 + +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) +make_image_grid([original_image, canny_image], rows=1, cols=2) original image canny image For human pose estimation, install controlnet_aux: Copied # uncomment to install the necessary library in Colab +#!pip install -q controlnet-aux Prepare the human pose estimation conditioning: Copied from controlnet_aux import OpenposeDetector + +openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") +original_image = load_image( + "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" +) +openpose_image = openpose(original_image) +make_image_grid([original_image, openpose_image], rows=1, cols=2) original image human pose image Load a list of ControlNet models that correspond to each conditioning, and pass them to the StableDiffusionXLControlNetPipeline. Use the faster UniPCMultistepScheduler and enable model offloading to reduce memory usage. 
Copied from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel, AutoencoderKL, UniPCMultistepScheduler +import torch + +controlnets = [ + ControlNetModel.from_pretrained( + "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16 + ), + ControlNetModel.from_pretrained( + "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16, use_safetensors=True + ), +] + +vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, vae=vae, torch_dtype=torch.float16, use_safetensors=True +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() Now you can pass your prompt (an optional negative prompt if you’re using one), canny image, and pose image to the pipeline: Copied prompt = "a giant standing in a fantasy landscape, best quality" +negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality" + +generator = torch.manual_seed(1) + +images = [openpose_image.resize((1024, 1024)), canny_image.resize((1024, 1024))] + +images = pipe( + prompt, + image=images, + num_inference_steps=25, + generator=generator, + negative_prompt=negative_prompt, + num_images_per_prompt=3, + controlnet_conditioning_scale=[1.0, 0.8], +).images +make_image_grid([original_image, canny_image, openpose_image, + images[0].resize((512, 512)), images[1].resize((512, 512)), images[2].resize((512, 512))], rows=2, cols=3) diff --git a/scrapped_outputs/e5148487f6847a65848054603cb0c7a9.txt b/scrapped_outputs/e5148487f6847a65848054603cb0c7a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/e5148487f6847a65848054603cb0c7a9.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. 
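For example, a minimal sketch of swapping this scheduler into a loaded pipeline and sampling with 20 steps (the checkpoint and prompt mirror the text-to-image example below): Copied
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16)
# swap in the recommended scheduler, then sample with as few as 20 steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
image = pipe("High quality photo of an astronaut riding a horse in space", num_inference_steps=20).images[0]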
Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = 
"bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/e514af932897cb0e9ef9db27b16c5efc.txt b/scrapped_outputs/e514af932897cb0e9ef9db27b16c5efc.txt new file mode 100644 index 0000000000000000000000000000000000000000..27e473e96ef3e5480dbddcafab99a5316b599755 --- /dev/null +++ b/scrapped_outputs/e514af932897cb0e9ef9db27b16c5efc.txt @@ -0,0 +1,57 @@ +Wuerstchen The Wuerstchen model drastically reduces computational costs by compressing the latent space by 42x, without compromising image quality and accelerating inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image. To fit the prior model into GPU memory and to speedup training, try enabling gradient_accumulation_steps, gradient_checkpointing, and mixed_precision respectively. This guide explores the train_text_to_image_prior.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/wuerstchen/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s dive right into the Wuerstchen training script! 
Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support Wuerstchen. This guide focuses on the code that is unique to the Wuerstchen training script. The main() function starts by initializing the image encoder - an EfficientNet - in addition to the usual scheduler and tokenizer. Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + pretrained_checkpoint_file = hf_hub_download("dome272/wuerstchen", filename="model_v2_stage_b.pt") + state_dict = torch.load(pretrained_checkpoint_file, map_location="cpu") + image_encoder = EfficientNetEncoder() + image_encoder.load_state_dict(state_dict["effnet_state_dict"]) + image_encoder.eval() You’ll also load the WuerstchenPrior model for optimization. Copied prior = WuerstchenPrior.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") + +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, you’ll apply some transforms to the images and tokenize the captions: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["effnet_pixel_values"] = [effnet_transforms(image) for image in images] + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop handles compressing the images to latent space with the EfficientNetEncoder, adding noise to the latents, and predicting the noise residual with the WuerstchenPrior model. Copied pred_noise = prior(noisy_latents, timesteps, prompt_embeds) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 Set the DATASET_NAME environment variable to the dataset name from the Hub. This guide uses the Pokémon BLIP captions dataset, but you can create and train on your own datasets as well (see the Create a dataset for training guide). To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --dataloader_num_workers=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="wuerstchen-prior-pokemon-model" Once training is complete, you can use your newly trained model for inference! 
Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16).to("cuda") + +caption = "A cute bird pokemon holding a shield" +images = pipeline( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images Next steps Congratulations on training a Wuerstchen model! To learn more about how to use your new model, the following may be helpful: Take a look at the Wuerstchen API documentation to learn more about how to use the pipeline for text-to-image generation and its limitations. diff --git a/scrapped_outputs/e52647b5a0f19dd762ff85c276b98c07.txt b/scrapped_outputs/e52647b5a0f19dd762ff85c276b98c07.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/e52647b5a0f19dd762ff85c276b98c07.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high-resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate There are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide. Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for a lower memory requirement: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering.
Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks togethere should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Diffusion Video also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video. Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the values the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/e52d2d2e865f1963a3170191d1bc1dc3.txt b/scrapped_outputs/e52d2d2e865f1963a3170191d1bc1dc3.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/e52d2d2e865f1963a3170191d1bc1dc3.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. 
For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namepsace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all it’s components to the Hub. For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. 
You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/e55ba7a5a5996bdeb3ff10aadc502309.txt b/scrapped_outputs/e55ba7a5a5996bdeb3ff10aadc502309.txt new file mode 100644 index 0000000000000000000000000000000000000000..b36fcdaae1a968a902d79e9e2398812f703a2021 --- /dev/null +++ b/scrapped_outputs/e55ba7a5a5996bdeb3ff10aadc502309.txt @@ -0,0 +1,63 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. 
The training script provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. prior model decoder model The main() function contains the code for preparing the dataset and training the model. One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to set up the optimizer to learn the prior model’s parameters.
Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. prior model decoder model Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" Once training is finished, you can use your newly trained model for inference! prior model decoder model Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k,v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipe.enable_model_cpu_offload() +prompt="A robot pokemon, 4k photo" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! 
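If you passed --push_to_hub, the trained prior is also uploaded under your namespace, so you can load it straight from the Hub instead of a local path. Below is a minimal sketch mirroring the snippet above; the repository id your-username/kandi2-prior-pokemon-model is a hypothetical placeholder matching the --output_dir used earlier, combined with the pretrained kandinsky-community/kandinsky-2-2-decoder checkpoint.

import torch
from diffusers import AutoPipelineForText2Image, DiffusionPipeline

# Hypothetical repo id created by --push_to_hub; substitute your own namespace/repo
prior_pipeline = DiffusionPipeline.from_pretrained(
    "your-username/kandi2-prior-pokemon-model", torch_dtype=torch.float16
)

# Prefix the prior components with "prior_" so they slot into the combined pipeline
prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()}

pipeline = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

image = pipeline(
    prompt="A robot pokemon, 4k photo",
    negative_prompt="low quality, bad quality",
).images[0]
image.save("robot_pokemon.png")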
Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined! diff --git a/scrapped_outputs/e5725aef581f2dd019717522d0fd7b5b.txt b/scrapped_outputs/e5725aef581f2dd019717522d0fd7b5b.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/e5725aef581f2dd019717522d0fd7b5b.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs, Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. SDXL Turbo should disable the guidance scale by setting guidance_scale=0.0. SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source, meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/e583fb073eaa90b84885d02d330abd51.txt b/scrapped_outputs/e583fb073eaa90b84885d02d330abd51.txt new file mode 100644 index 0000000000000000000000000000000000000000..a1d62e149f06897a73f0cf31016ea5252858f00a --- /dev/null +++ b/scrapped_outputs/e583fb073eaa90b84885d02d330abd51.txt @@ -0,0 +1,525 @@ +Kandinsky 2.1 Kandinsky 2.1 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. The description from its GitHub page is: Kandinsky 2.1 inherits best practices from Dall-E 2 and Latent diffusion, while introducing some new ideas. As text and image encoder it uses the CLIP model and a diffusion image prior (mapping) between latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyPriorPipeline class diffusers.KandinskyPriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyPriorPipeline, KandinskyPipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... 
) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +>>> pipe.to("cuda") + +>>> image = pipe( +... "", +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyPipeline class diffusers.KandinskyPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image_embeds: Union negative_image_embeds: Union negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyPipeline, KandinskyPriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/Kandinsky-2-1-prior") +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> negative_image_emb = out.negative_image_embeds + +>>> pipe = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1") +>>> pipe.to("cuda") + +>>> image = pipe( +... prompt, +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") KandinskyCombinedPipeline class diffusers.KandinskyCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. KandinskyImg2ImgPipeline class diffusers.KandinskyImg2ImgPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 strength: float = 0.3 guidance_scale: float = 7.0 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. 
The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "A red cartoon frog, 4k" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyImg2ImgPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/frog.png" +... ) + +>>> image = pipe( +... prompt, +... image=init_image, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... strength=0.2, +... ).images + +>>> image[0].save("red_frog.png") KandinskyImg2ImgCombinedPipeline class diffusers.KandinskyImg2ImgCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. 
prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 strength: float = 0.3 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. 
of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image +import os + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +image = Image.open(BytesIO(response.content)).convert("RGB") +image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, image=original_image, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. KandinskyInpaintPipeline class diffusers.KandinskyInpaintPipeline < source > ( text_encoder: MultilingualCLIP movq: VQModel tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ image encoder and decoder Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union image_embeds: FloatTensor negative_image_embeds: FloatTensor negative_prompt: Union = None height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image or np.ndarray) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image, torch.FloatTensor or np.ndarray) — +Image, or a tensor representing an image batch, used to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. You can pass a PyTorch tensor as mask only if the +image you passed is a PyTorch tensor, and it should contain one color channel (L) instead of 3, so the +expected shape would be either (B, 1, H, W), (B, H, W), (1, H, W) or (H, W). If image is a PIL +image or NumPy array, mask should also be either a PIL image or NumPy array. If it is a PIL image, it +will be converted to a single channel (luminance) before use. If it is a NumPy array, the expected +shape is (H, W). image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the text prompt, which will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The CLIP image embeddings for the negative text prompt, which will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image.
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +>>> from diffusers.utils import load_image +>>> import torch +>>> import numpy as np + +>>> pipe_prior = KandinskyPriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "a hat" +>>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) + +>>> pipe = KandinskyInpaintPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> init_image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> mask = np.zeros((768, 768), dtype=np.float32) +>>> mask[:250, 250:-250] = 1 + +>>> out = pipe( +... prompt, +... image=init_image, +... mask_image=mask, +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ) + +>>> image = out.images[0] +>>> image.save("cat_with_hat.png") KandinskyInpaintCombinedPipeline class diffusers.KandinskyInpaintCombinedPipeline < source > ( text_encoder: MultilingualCLIP tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: Union movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters text_encoder (MultilingualCLIP) — +Frozen text-encoder. tokenizer (XLMRobertaTokenizer) — +Tokenizer of class XLMRobertaTokenizer. scheduler (Union[DDIMScheduler, DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image; if passing latents directly, they will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, used to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 25) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor).
callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/e586ee1eccb1ed4de2a076010c75eda3.txt b/scrapped_outputs/e586ee1eccb1ed4de2a076010c75eda3.txt new file mode 100644 index 0000000000000000000000000000000000000000..7235bf599eac1cdd8c6108fb262fc2794379fab5 --- /dev/null +++ b/scrapped_outputs/e586ee1eccb1ed4de2a076010c75eda3.txt @@ -0,0 +1,34 @@ +🧨 Diffusers’ Ethical Guidelines + + +Preamble + +Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. +Given its real-world applications and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. +The risks associated with using this technology are still being examined, but to name a few: copyright issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. + +Scope + +The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. + +Ethical guidelines + +The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice.
Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. +Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. +Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. +Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. +Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. +Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. +Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. + +Examples of implementations: Safety features and Mechanisms + +The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. +Community tab: it enables the community to discuss and better collaborate on a project. +Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. +Encouraging safety in deployment +Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. +Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. +Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. +Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/e59174af82e987e2a48d00f08c022e8d.txt b/scrapped_outputs/e59174af82e987e2a48d00f08c022e8d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e5a5678cdc2c2ca2d326c18a0ed977f9.txt b/scrapped_outputs/e5a5678cdc2c2ca2d326c18a0ed977f9.txt new file mode 100644 index 0000000000000000000000000000000000000000..3202fb51e10a32c683f71e7b038c0b00367fe667 --- /dev/null +++ b/scrapped_outputs/e5a5678cdc2c2ca2d326c18a0ed977f9.txt @@ -0,0 +1 @@ +Overview The APIs in this section are more experimental and prone to breaking changes. 
Most of them are used internally for development, but they may also be useful to you if you’re interested in building a diffusion model with some custom parts or if you’re interested in some of our helper utilities for working with 🤗 Diffusers. diff --git a/scrapped_outputs/e5e68a6608fc5bc8b7a6122d94f3da1d.txt b/scrapped_outputs/e5e68a6608fc5bc8b7a6122d94f3da1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/e5e68a6608fc5bc8b7a6122d94f3da1d.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts include: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. 
Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speedup training and reduce memory-usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/e613cc189f2b8b62e5319d7ad0396657.txt b/scrapped_outputs/e613cc189f2b8b62e5319d7ad0396657.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/e613cc189f2b8b62e5319d7ad0396657.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. 
Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/e642e96cc0a39f92ab0489a08df9eef3.txt b/scrapped_outputs/e642e96cc0a39f92ab0489a08df9eef3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e6463f9e98ea7eccc1680a659ef74b52.txt b/scrapped_outputs/e6463f9e98ea7eccc1680a659ef74b52.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/e6463f9e98ea7eccc1680a659ef74b52.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
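The tips referenced above recommend the DPMSolverMultistepScheduler (usable with as few as ~20 steps) and reusing already-loaded pipeline components across tasks. The snippet below is a minimal, illustrative sketch of both ideas, not part of the official examples: the model id comes from the table above, and building an image-to-image pipeline from an existing pipeline's components is one common diffusers pattern; the fuller per-task examples follow. Copied
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler, StableDiffusionImg2ImgPipeline
import torch

# load the 768x768 text-to-image checkpoint and swap in the recommended scheduler
text2img = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
text2img.scheduler = DPMSolverMultistepScheduler.from_config(text2img.scheduler.config)
text2img = text2img.to("cuda")

# reuse the already-loaded components for an image-to-image pipeline instead of downloading them again
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)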
Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/e65ca6a232008a4cccb534c9d6fec8b5.txt b/scrapped_outputs/e65ca6a232008a4cccb534c9d6fec8b5.txt new file mode 100644 index 0000000000000000000000000000000000000000..68423ddd910d132ae1322ca37d1a005d76c1e75b --- /dev/null +++ b/scrapped_outputs/e65ca6a232008a4cccb534c9d6fec8b5.txt @@ -0,0 +1,238 @@ +VQDiffusionScheduler + + +Overview + +Original paper can be found here + +VQDiffusionScheduler + + +class diffusers.VQDiffusionScheduler + +< +source +> +( +num_vec_classes: int +num_train_timesteps: int = 100 +alpha_cum_start: float = 0.99999 +alpha_cum_end: float = 
9e-06 +gamma_cum_start: float = 9e-06 +gamma_cum_end: float = 0.99999 + +) + + +Parameters + +num_vec_classes (int) — +The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked +latent pixel. + + +num_train_timesteps (int) — +Number of diffusion steps used to train the model. + + +alpha_cum_start (float) — +The starting cumulative alpha value. + + +alpha_cum_end (float) — +The ending cumulative alpha value. + + +gamma_cum_start (float) — +The starting cumulative gamma value. + + +gamma_cum_end (float) — +The ending cumulative gamma value. + + + +The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image. +The VQ-diffusion scheduler converts the transformer’s output into a sample for the unnoised image at the previous +diffusion timestep. +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2111.14822 + +log_Q_t_transitioning_to_known_class + +< +source +> +( +t: torch.int32 +x_t: LongTensor +log_onehot_x_t: FloatTensor +cumulative: bool + +) +→ +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + +Parameters + +t (torch.Long) — +The timestep that determines which transition matrix is used. + + +x_t (torch.LongTensor of shape (batch size, num latent pixels)) — +The classes of each latent pixel at time t. + + +log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — +The log one-hot vectors of x_t + + +cumulative (bool) — +If cumulative is False, we use the single step transition matrix t-1->t. If cumulative is True, +we use the cumulative transition matrix 0->t. + + +Returns + +torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) + + + +Each column of the returned matrix is a row of log probabilities of the complete probability +transition matrix. +When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be +masked. +Where: + +q_n is the probability distribution for the forward process of the nth latent pixel. +C_0 is a class of a latent pixel embedding +C_k is the class of the masked latent pixel + +non-cumulative result (omitting logarithms): +q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0) + . . . + . . . + . . . +q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k) + +cumulative result (omitting logarithms): +q_0_cumulative(x_t | x_0 = C_0) ... q_n_cumulative(x_t | x_0 = C_0) + . . . + . . . + . . . +q_0_cumulative(x_t | x_0 = C_{k-1}) ... q_n_cumulative(x_t | x_0 = C_{k-1}) + + +Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each +latent pixel in x_t. +See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix +is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs. + +q_posterior + +< +source +> +( +log_p_x_0 +x_t +t + +) +→ +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + +Parameters + +t (torch.Long) — +The timestep that determines which transition matrix is used.
+ + +Returns + +torch.FloatTensor of shape (batch size, num classes, num latent pixels) + + + +The log probabilities for the predicted classes of the image at timestep t-1. I.e. Equation (11). + + +Calculates the log probabilities for the predicted classes of the image at timestep t-1. I.e. Equation (11). +Instead of directly computing equation (11), we use Equation (5) to restate Equation (11) in terms of only +forward probabilities. +Equation (11) stated in terms of forward probabilities via Equation (5): +Where: +the sum is over x0 = {C_0 … C{k-1}} (classes for x_0) +p(x{t-1} | x_t) = sum( q(x_t | x{t-1}) q(x_{t-1} | x_0) p(x_0) / q(x_t | x_0) ) + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device) — +device to place the timesteps and the diffusion process parameters (alpha, beta, gamma) on. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: torch.int64 +sample: LongTensor +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + +Parameters + +t (torch.long) — +The timestep that determines which transition matrices are used. +x_t — (torch.LongTensor of shape (batch size, num latent pixels)): +The classes of each latent pixel at time t +generator — (torch.Generator or None): +RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from. + + +return_dict (bool) — +option for returning tuple rather than VQDiffusionSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.VQDiffusionSchedulerOutput if return_dict is True, otherwise a tuple. +When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep via the reverse transition distribution i.e. Equation (11). See the +docstring for self.q_posterior for more in depth docs on how Equation (11) is computed. diff --git a/scrapped_outputs/e68c136ed3cbad0dffbf4d9bd956887b.txt b/scrapped_outputs/e68c136ed3cbad0dffbf4d9bd956887b.txt new file mode 100644 index 0000000000000000000000000000000000000000..89bb655d08dd6180d639a4e910ba59f59d090923 --- /dev/null +++ b/scrapped_outputs/e68c136ed3cbad0dffbf4d9bd956887b.txt @@ -0,0 +1,98 @@ +MultiDiffusion MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation is by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. The abstract from the paper is: Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. 
At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes. You can find additional information about MultiDiffusion on the project page, original codebase, and try it out in a demo. Tips While calling StableDiffusionPanoramaPipeline, it’s possible to specify the view_batch_size parameter to be > 1. +For some GPUs with high performance, this can speedup the generation process and increase VRAM usage. To generate panorama-like images make sure you pass the width parameter accordingly. We recommend a width value of 2048 which is the default. Circular padding is applied to ensure there are no stitching artifacts when working with panoramas to ensure a seamless transition from the rightmost part to the leftmost part. By enabling circular padding (set circular_padding=True), the operation applies additional crops after the rightmost point of the image, allowing the model to “see” the transition from the rightmost part to the leftmost part. This helps maintain visual consistency in a 360-degree sense and creates a proper “panorama” that can be viewed using 360-degree panorama viewers. When decoding latents in Stable Diffusion, circular padding is applied to ensure that the decoded latents match in the RGB space. For example, without circular padding, there is a stitching artifact (default): + But with circular padding, the right and the left parts are matching (circular_padding=True): + Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionPanoramaPipeline class diffusers.StableDiffusionPanoramaPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: DDIMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using MultiDiffusion. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = 512 width: Optional = 2048 num_inference_steps: int = 50 guidance_scale: float = 7.5 view_batch_size: int = 1 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None circular_padding: bool = False clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 2048) — +The width in pixels of the generated image. The width is kept high because the pipeline is supposed +generate panorama-like images. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. view_batch_size (int, optional, defaults to 1) — +The batch size to denoise split views. For some GPUs with high performance, higher view batch size can +speedup the generation and increase the VRAM usage. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. ip_adapter_image_embeds (List[torch.FloatTensor], optional) — +Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. +Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding +if do_classifier_free_guidance is set to True. +If not provided, embeddings are computed from the ip_adapter_image input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. circular_padding (bool, optional, defaults to False) — +If set to True, circular padding is applied to ensure there are no stitching artifacts. Circular +padding allows the model to seamlessly generate a transition from the rightmost part of the image to +the leftmost part, maintaining consistency in a 360-degree sense. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler + +>>> model_ckpt = "stabilityai/stable-diffusion-2-base" +>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler") +>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained( +... model_ckpt, scheduler=scheduler, torch_dtype=torch.float16 +... ) + +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of the dolomites" +>>> image = pipe(prompt).images[0] encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e6909c2b2afa9809257a6bb22dfd0fab.txt b/scrapped_outputs/e6909c2b2afa9809257a6bb22dfd0fab.txt new file mode 100644 index 0000000000000000000000000000000000000000..003f32be8c473f8f647bdb5cf370dbd7c1372127 --- /dev/null +++ b/scrapped_outputs/e6909c2b2afa9809257a6bb22dfd0fab.txt @@ -0,0 +1,23 @@ +Text-Guided Image-to-Image Generation + +The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the images’ structure. If no depth_map is provided, the pipeline will automatically predict the depth via an integrated depth-estimation model. + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +n_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] diff --git a/scrapped_outputs/e6c31483effa8514f7542abebea4e6ba.txt b/scrapped_outputs/e6c31483effa8514f7542abebea4e6ba.txt new file mode 100644 index 0000000000000000000000000000000000000000..67b5d55033891dd6a812b152f5290d66bb31ebfe --- /dev/null +++ b/scrapped_outputs/e6c31483effa8514f7542abebea4e6ba.txt @@ -0,0 +1,239 @@ +Würstchen Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models is by Pablo Pernias, Dominic Rampas, Mats L. Richter and Christopher Pal and Marc Aubreville. 
The abstract from the paper is: We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation used to guide the diffusion process. This highly compressed representation of an image provides much more detailed guidance compared to latent representations of language and this significantly reduces the computational requirements to achieve state-of-the-art results. Our approach also improves the quality of text-conditioned image generation based on our user preference study. The training requirements of our approach consists of 24,602 A100-GPU hours - compared to Stable Diffusion 2.1’s 200,000 GPU hours. Our approach also requires less training data to achieve these results. Furthermore, our compact latent representations allows us to perform inference over twice as fast, slashing the usual costs and carbon footprint of a state-of-the-art (SOTA) diffusion model significantly, without compromising the end performance. In a broader comparison against SOTA models our approach is substantially more efficient and compares favorably in terms of image quality. We believe that this work motivates more emphasis on the prioritization of both performance and computational accessibility. Würstchen Overview + +Würstchen is a diffusion model, whose text-conditional model works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by magnitudes. Training on 1024x1024 images is way more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression. This was unseen before because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, what we call Stage A and Stage B. Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://huggingface.co/papers/2306.00637)). A third model, Stage C, is learned in that highly compressed latent space. This training requires fractions of the compute used for current top-performing models, while also allowing cheaper and faster inference. + Würstchen v2 comes to Diffusers After the initial paper release, we have improved numerous things in the architecture, training and sampling, making Würstchen competitive to current state-of-the-art models in many ways. We are excited to release this new version together with Diffusers. Here is a list of the improvements. Higher resolution (1024x1024 up to 2048x2048) Faster inference Multi Aspect Resolution Sampling Better quality We are releasing 3 checkpoints for the text-conditional image generation model (Stage C). Those are: v2-base v2-aesthetic (default) v2-interpolated (50% interpolation between v2-base and v2-aesthetic) We recommend using v2-interpolated, as it has a nice touch of both photorealism and aesthetics. Use v2-base for finetunings as it does not have a style bias and use v2-aesthetic for very artistic generations. 
+A comparison can be seen here: Text-to-Image Generation For the sake of usability, Würstchen can be used with a single pipeline. This pipeline can be used as follows: Copied import torch +from diffusers import AutoPipelineForText2Image +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda") + +caption = "Anthropomorphic cat dressed as a fire fighter" +images = pipe( + caption, + width=1024, + height=1536, + prior_timesteps=DEFAULT_STAGE_C_TIMESTEPS, + prior_guidance_scale=4.0, + num_images_per_prompt=2, +).images For explanation purposes, we can also initialize the two main pipelines of Würstchen individually. Würstchen consists of 3 stages: Stage C, Stage B, Stage A. They all have different jobs and work only together. When generating text-conditional images, Stage C will first generate the latents in a very compressed latent space. This is what happens in the prior_pipeline. Afterwards, the generated latents will be passed to Stage B, which decompresses the latents into a bigger latent space of a VQGAN. These latents can then be decoded by Stage A, which is a VQGAN, into the pixel-space. Stage B & Stage A are both encapsulated in the decoder_pipeline. For more details, take a look at the paper. Copied import torch +from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline +from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS + +device = "cuda" +dtype = torch.float16 +num_images_per_prompt = 2 + +prior_pipeline = WuerstchenPriorPipeline.from_pretrained( + "warp-ai/wuerstchen-prior", torch_dtype=dtype +).to(device) +decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained( + "warp-ai/wuerstchen", torch_dtype=dtype +).to(device) + +caption = "Anthropomorphic cat dressed as a fire fighter" +negative_prompt = "" + +prior_output = prior_pipeline( + prompt=caption, + height=1024, + width=1536, + timesteps=DEFAULT_STAGE_C_TIMESTEPS, + negative_prompt=negative_prompt, + guidance_scale=4.0, + num_images_per_prompt=num_images_per_prompt, +) +decoder_output = decoder_pipeline( + image_embeddings=prior_output.image_embeddings, + prompt=caption, + negative_prompt=negative_prompt, + guidance_scale=0.0, + output_type="pil", +).images[0] +decoder_output Speed-Up Inference + +You can make use of `torch.compile` function and gain a speed-up of about 2-3x: + + Copied prior_pipeline.prior = torch.compile(prior_pipeline.prior, mode="reduce-overhead", fullgraph=True) +decoder_pipeline.decoder = torch.compile(decoder_pipeline.decoder, mode="reduce-overhead", fullgraph=True) Limitations Due to the high compression employed by Würstchen, generations can lack a good amount +of detail. To our human eye, this is especially noticeable in faces, hands etc. Images can only be generated in 128-pixel steps, e.g. the next higher resolution +after 1024x1024 is 1152x1152 The model lacks the ability to render correct text in images The model often does not achieve photorealism Difficult compositional prompts are hard for the model The original codebase, as well as experimental ideas, can be found at dome272/Wuerstchen. 
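Because generation only works in 128-pixel steps (see the limitations above, e.g. 1024x1024 followed by 1152x1152), it can be convenient to snap a requested resolution to a supported size before calling the pipelines shown earlier. The helper below is a minimal, purely illustrative sketch; snap_to_128 is our own name and not part of the library. Copied
def snap_to_128(size: int) -> int:
    """Round an edge length up to the next multiple of 128 (e.g. 1100 -> 1152)."""
    return ((size + 127) // 128) * 128

height, width = snap_to_128(1100), snap_to_128(1500)
print(height, width)  # 1152 1536
# pass these values as `height=` and `width=` in the pipeline calls shown above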
WuerstchenCombinedPipeline class diffusers.WuerstchenCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModel prior_prior: WuerstchenPrior prior_scheduler: DDPMWuerstchenScheduler ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (WuerstchenDiffNeXt) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. prior_tokenizer (CLIPTokenizer) — +The prior tokenizer to be used for text inputs. prior_text_encoder (CLIPTextModel) — +The prior text encoder to be used for text inputs. prior_prior (WuerstchenPrior) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Wuerstchen This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. 
A higher guidance scale encourages the model to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps. num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps. prior_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the prior. If not defined, equally spaced +prior_num_inference_steps timesteps are used. Must be in descending order. decoder_timesteps (List[float], optional) — +Custom timesteps to use for the denoising process for the decoder. If not defined, equally spaced +num_inference_steps timesteps are used. Must be in descending order. decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument.
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenCombinedPipeline + +>>> pipe = WuerstchenCombinedPipeline.from_pretrained("warp-ai/Wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt) enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower. WuerstchenPriorPipeline class diffusers.WuerstchenPriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel prior: WuerstchenPrior scheduler: DDPMWuerstchenScheduler latent_mean: float = 42.0 latent_std: float = 1.0 resolution_multiple: float = 42.67 ) Parameters prior (Prior) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_mean (‘float’, optional, defaults to 42.0) — +Mean value for latent diffusers. latent_std (‘float’, optional, defaults to 1.0) — +Standard value for latent diffusers. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Wuerstchen. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 60 timesteps: List = None guidance_scale: float = 8.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of the Imagen +Paper. Guidance scale is enabled by setting +guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +...
).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = pipe(prompt) WuerstchenPriorPipelineOutput class diffusers.pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput < source > ( image_embeddings: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for text prompt Output class for WuerstchenPriorPipeline. WuerstchenDecoderPipeline class diffusers.WuerstchenDecoderPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: WuerstchenDiffNeXt scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (WuerstchenDiffNeXt) — +The WuerstchenDiffNeXt unet decoder. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(2410.67)=256 and +width=int(2410.67)=256 in order to match the training conditions. Pipeline for generating images from the Wuerstchen model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 12 timesteps: Optional = None guidance_scale: float = 0.0 negative_prompt: Union = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) Parameters image_embedding (torch.FloatTensor or List[torch.FloatTensor]) — +Image Embeddings either extracted from an image or generated by a Prior Model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 12) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import WuerstchenPriorPipeline, WuerstchenDecoderPipeline + +>>> prior_pipe = WuerstchenPriorPipeline.from_pretrained( +... "warp-ai/wuerstchen-prior", torch_dtype=torch.float16 +... ).to("cuda") +>>> gen_pipe = WuerstchenDecoderPipeline.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to( +... "cuda" +... ) + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt).images Citation Copied @misc{pernias2023wuerstchen, + title={Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models}, + author={Pablo Pernias and Dominic Rampas and Mats L. Richter and Christopher J. Pal and Marc Aubreville}, + year={2023}, + eprint={2306.00637}, + archivePrefix={arXiv}, + primaryClass={cs.CV} + } diff --git a/scrapped_outputs/e6d48e036a06dee8ff6d5e6a63dc67ce.txt b/scrapped_outputs/e6d48e036a06dee8ff6d5e6a63dc67ce.txt new file mode 100644 index 0000000000000000000000000000000000000000..eae97d48957b506f07d72be1233edeb0c5d9045e --- /dev/null +++ b/scrapped_outputs/e6d48e036a06dee8ff6d5e6a63dc67ce.txt @@ -0,0 +1,194 @@ +Euler scheduler + + +Overview + +Euler scheduler (Algorithm 2) from the paper Elucidating the Design Space of Diffusion-Based Generative Models by Karras et al. (2022). Based on the original k-diffusion implementation by Katherine Crowson. +A fast scheduler which oftentimes generates good outputs with 20-30 steps. + +EulerDiscreteScheduler + + +class diffusers.EulerDiscreteScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +beta_start: float = 0.0001 +beta_end: float = 0.02 +beta_schedule: str = 'linear' +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None +prediction_type: str = 'epsilon' +interpolation_type: str = 'linear' +use_karras_sigmas: typing.Optional[bool] = False + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. + + +beta_start (float) — the starting beta value of inference. + + +beta_end (float) — the final beta value.
+ + +beta_schedule (str) — +the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. + + +trained_betas (np.ndarray, optional) — +option to pass an array of betas directly to the constructor to bypass beta_start, beta_end etc. + + +prediction_type (str, default "epsilon", optional) — +prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion +process), sample (directly predicting the noisy sample) or v_prediction` (see section 2.4 +https://imagen.research.google/video/paper.pdf) + + +interpolation_type (str, default "linear", optional) — +interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of +["linear", "log_linear"]. + + +use_karras_sigmas (bool, optional, defaults to False) — +This parameter controls whether to use Karras sigmas (Karras et al. (2022) scheme) for step sizes in the +noise schedule during the sampling process. If True, the sigmas will be determined according to a sequence +of noise levels {σi} as defined in Equation (5) of the paper https://arxiv.org/pdf/2206.00364.pdf. + + + +Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original +k-diffusion implementation by Katherine Crowson: +https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. + +scale_model_input + +< +source +> +( +sample: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +timestep (float or torch.FloatTensor) — the current timestep in the diffusion chain + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + +device (str or torch.device, optional) — +the device to which the timesteps should be moved to. If None, the timesteps are not moved. + + + +Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: typing.Union[float, torch.FloatTensor] +sample: FloatTensor +s_churn: float = 0.0 +s_tmin: float = 0.0 +s_tmax: float = inf +s_noise: float = 1.0 +generator: typing.Optional[torch._C.Generator] = None +return_dict: bool = True + +) +→ +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (float) — current timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +s_churn (float) — + + +s_tmin (float) — + + +s_tmax (float) — + + +s_noise (float) — + + +generator (torch.Generator, optional) — Random number generator. 
+ + +return_dict (bool) — option for returning tuple rather than EulerDiscreteSchedulerOutput class + + +Returns + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple + + + +~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is the sample tensor. + + +Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/e6d85e099f7acd244a22d602225d3b4e.txt b/scrapped_outputs/e6d85e099f7acd244a22d602225d3b4e.txt new file mode 100644 index 0000000000000000000000000000000000000000..318ab6fdce153fe6e03446d657892dab8f47db92 --- /dev/null +++ b/scrapped_outputs/e6d85e099f7acd244a22d602225d3b4e.txt @@ -0,0 +1,48 @@ +How to use the ONNX Runtime for inference + +🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. + +Installation + +Install 🤗 Optimum with the following command for ONNX Runtime support: + + + Copied +pip install optimum["onnxruntime"] + +Stable Diffusion Inference + +To load an ONNX model and run inference with the ONNX Runtime, you need to replace StableDiffusionPipeline with ORTStableDiffusionPipeline. In case you want to load +a PyTorch model and convert it to the ONNX format on-the-fly, you can set export=True. + + + Copied +from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "a photo of an astronaut riding a horse on mars" +images = pipe(prompt).images[0] +pipe.save_pretrained("./onnx-stable-diffusion-v1-5") +If you want to export the pipeline in the ONNX format offline and later use it for inference, +you can use the optimum-cli export command: + + + Copied +optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ +Then perform inference: + + + Copied +from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipe = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "a photo of an astronaut riding a horse on mars" +images = pipe(prompt).images[0] +Notice that we didn’t have to specify export=True above. +You can find more examples in optimum documentation. + +Known Issues + +Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. diff --git a/scrapped_outputs/e6e2fe74744fad35f7875aedf0fc3ada.txt b/scrapped_outputs/e6e2fe74744fad35f7875aedf0fc3ada.txt new file mode 100644 index 0000000000000000000000000000000000000000..c6ada9556f117e916687e4a6c5586a56d8e2825d --- /dev/null +++ b/scrapped_outputs/e6e2fe74744fad35f7875aedf0fc3ada.txt @@ -0,0 +1,17 @@ +Load safetensors safetensors is a safe and fast file format for storing and loading tensors. Typically, PyTorch model weights are saved or pickled into a .bin file with Python’s pickle utility. However, pickle is not secure and pickled files may contain malicious code that can be executed. safetensors is a secure alternative to pickle, making it ideal for sharing model weights. This guide will show you how you load .safetensor files, and how to convert Stable Diffusion model weights stored in other formats to .safetensor. 
Before you start, make sure you have safetensors installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install safetensors If you look at the runwayml/stable-diffusion-v1-5 repository, you’ll see weights inside the text_encoder, unet and vae subfolders are stored in the .safetensors format. By default, 🤗 Diffusers automatically loads these .safetensors files from their subfolders if they’re available in the model repository. For more explicit control, you can optionally set use_safetensors=True (if safetensors is not installed, you’ll get an error message asking you to install it): Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) However, model weights are not necessarily stored in separate subfolders like in the example above. Sometimes, all the weights are stored in a single .safetensors file. In this case, if the weights are Stable Diffusion weights, you can load the file directly with the from_single_file() method: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_single_file( + "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +) Convert to safetensors Not all weights on the Hub are available in the .safetensors format, and you may encounter weights stored as .bin. In this case, use the Convert Space to convert the weights to .safetensors. The Convert Space downloads the pickled weights, converts them, and opens a Pull Request to upload the newly converted .safetensors file on the Hub. This way, if there is any malicious code contained in the pickled files, they’re uploaded to the Hub - which has a security scanner to detect unsafe files and suspicious pickle imports - instead of your computer. You can use the model with the new .safetensors weights by specifying the reference to the Pull Request in the revision parameter (you can also test it in this Check PR Space on the Hub), for example refs/pr/22: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", revision="refs/pr/22", use_safetensors=True +) Why use safetensors? There are several reasons for using safetensors: Safety is the number one reason for using safetensors. As open-source and model distribution grows, it is important to be able to trust the model weights you downloaded don’t contain any malicious code. The current size of the header in safetensors prevents parsing extremely large JSON files. Loading speed between switching models is another reason to use safetensors, which performs zero-copy of the tensors. It is especially fast compared to pickle if you’re loading the weights to CPU (the default case), and just as fast if not faster when directly loading the weights to GPU. You’ll only notice the performance difference if the model is already loaded, and not if you’re downloading the weights or loading the model for the first time. 
The time it takes to load the entire pipeline: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", use_safetensors=True) +"Loaded in safetensors 0:00:02.033658" +"Loaded in PyTorch 0:00:02.663379" But the actual time it takes to load 500MB of the model weights is only: Copied safetensors: 3.4873ms +PyTorch: 172.7537ms Lazy loading is also supported in safetensors, which is useful in distributed settings to only load some of the tensors. This format allowed the BLOOM model to be loaded in 45 seconds on 8 GPUs instead of 10 minutes with regular PyTorch weights. diff --git a/scrapped_outputs/e6ec77ab119ec91552006374926a4c47.txt b/scrapped_outputs/e6ec77ab119ec91552006374926a4c47.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/e6ec77ab119ec91552006374926a4c47.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/e6feae495f0863bca1dd63f9cb11d24a.txt b/scrapped_outputs/e6feae495f0863bca1dd63f9cb11d24a.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/e6feae495f0863bca1dd63f9cb11d24a.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. 
This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. 
--pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. 
Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url parameter to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl.wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train an SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide.
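If you distilled with the LoRA scripts mentioned above, the resulting weights can be loaded into a regular pipeline for few-step inference. The snippet below is a minimal sketch rather than code from this guide, and "your-username/your-lcm-lora" is a placeholder repository id you would replace with your own. Copied
# Minimal sketch (assumes a LoRA produced by the distillation scripts above);
# "your-username/your-lcm-lora" is a placeholder repository id.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
)
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
pipeline.load_lora_weights("your-username/your-lcm-lora")  # placeholder
pipeline.to("cuda")

image = pipeline(
    "sushi rolls in the form of panda heads, sushi platter",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
As with the full checkpoint above, only a handful of inference steps and a low guidance scale are needed once the LCM weights are loaded.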
Next steps Congratulations on distilling a LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRA’s for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/e70761a96c9b122616cf8800eb98dde6.txt b/scrapped_outputs/e70761a96c9b122616cf8800eb98dde6.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/e70761a96c9b122616cf8800eb98dde6.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. 
For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/e72553257118e15aed5b25449052929e.txt b/scrapped_outputs/e72553257118e15aed5b25449052929e.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e7362287c73b376e2955e69469c88258.txt b/scrapped_outputs/e7362287c73b376e2955e69469c88258.txt new file mode 100644 index 0000000000000000000000000000000000000000..6aaf3fc017641a3b23f127adc2cdbafd5e059ae6 --- /dev/null +++ b/scrapped_outputs/e7362287c73b376e2955e69469c88258.txt @@ -0,0 +1,33 @@ +Transformer2D A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (batch_size, sequence_length, feature_dimension). Apply the Transformer blocks in the standard way. Reshape to image. When the input is discrete: It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don’t contain a prediction for the masked pixel because the unnoised image cannot be masked. Convert input (classes of latent pixels) to embeddings and apply positional embeddings. Apply the Transformer blocks in the standard way. Predict classes of unnoised image. Transformer2DModel class diffusers.Transformer2DModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None num_vector_embeds: Optional = None patch_size: Optional = None activation_fn: str = 'geglu' num_embeds_ada_norm: Optional = None use_linear_projection: bool = False only_cross_attention: bool = False double_self_attention: bool = False upcast_attention: bool = False norm_type: str = 'layer_norm' norm_elementwise_affine: bool = True norm_eps: float = 1e-05 attention_type: str = 'default' caption_channels: int = None interpolation_scale: float = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. num_vector_embeds (int, optional) — +The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). +Includes the class for the masked latent pixel. 
activation_fn (str, optional, defaults to "geglu") — Activation function to use in feed-forward. num_embeds_ada_norm ( int, optional) — +The number of diffusion steps used during training. Pass if at least one of the norm_layers is +AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are +added to the hidden states. +During inference, you can denoise for up to but not more steps than num_embeds_ada_norm. attention_bias (bool, optional) — +Configure if the TransformerBlocks attention should contain a bias parameter. A 2D Transformer model for image-like data. forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: Dict = None attention_mask: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True ) Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. cross_attention_kwargs ( Dict[str, Any], optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. attention_mask ( torch.Tensor, optional) — +An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1 the mask +is kept, otherwise if 0 it is discarded. Mask will be converted into a bias, which adds large +negative values to the attention scores corresponding to “discard” tokens. encoder_attention_mask ( torch.Tensor, optional) — +Cross-attention mask applied to encoder_hidden_states. Two formats supported: + +Mask (batch, sequence_length) True = keep, False = discard. +Bias (batch, 1, sequence_length) 0 = keep, -10000 = discard. + +If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format +above. This bias will be added to the cross-attention scores. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. The Transformer2DModel forward method. Transformer2DModelOutput class diffusers.models.transformers.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) — +The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability +distributions for the unnoised latent pixels. The output of Transformer2DModel. 
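As a quick illustration of the continuous-input path described above (project the input, run the Transformer blocks, reshape back to an image-like tensor), the sketch below instantiates a tiny Transformer2DModel and runs a single forward pass. It is not taken from any real model configuration; the sizes are arbitrary and chosen only so the example runs quickly. Copied
# Minimal sketch of the continuous-input path; sizes are arbitrary.
import torch
from diffusers import Transformer2DModel

model = Transformer2DModel(
    num_attention_heads=2,
    attention_head_dim=16,  # inner dimension = 2 * 16 = 32
    in_channels=32,         # divisible by the default norm_num_groups=32
    num_layers=1,
)

hidden_states = torch.randn(1, 32, 16, 16)  # (batch, channel, height, width)
with torch.no_grad():
    output = model(hidden_states)  # no encoder_hidden_states -> self-attention only
print(output.sample.shape)  # torch.Size([1, 32, 16, 16])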
diff --git a/scrapped_outputs/e7486b5faa06aed9c7194eb311000eb9.txt b/scrapped_outputs/e7486b5faa06aed9c7194eb311000eb9.txt new file mode 100644 index 0000000000000000000000000000000000000000..c618df35dab9f1ea7404eb6772bf3711c834e51e --- /dev/null +++ b/scrapped_outputs/e7486b5faa06aed9c7194eb311000eb9.txt @@ -0,0 +1,40 @@ +Stable Video Diffusion Stable Video Diffusion (SVD) is a powerful image-to-video generation model that can generate 2-4 second high resolution (576x1024) videos conditioned on an input image. This guide will show you how to use SVD to generate short videos from images. Before you begin, make sure you have the following libraries installed: Copied !pip install -q -U diffusers transformers accelerate There are two variants of this model, SVD and SVD-XT. The SVD checkpoint is trained to generate 14 frames and the SVD-XT checkpoint is further finetuned to generate 25 frames. You’ll use the SVD-XT checkpoint for this guide. Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] + +export_to_video(frames, "generated.mp4", fps=7) "source image of a rocket" "generated video from source image" torch.compile You can gain a 20-25% speedup at the expense of slightly increased memory by compiling the UNet. Copied - pipe.enable_model_cpu_offload() ++ pipe.to("cuda") ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Reduce memory usage Video generation is very memory intensive because you’re essentially generating num_frames all at once, similar to text-to-image generation with a high batch size. To reduce the memory requirement, there are multiple options that trade off inference speed for a lower memory requirement: enable model offloading: each component of the pipeline is offloaded to the CPU once it’s not needed anymore. enable feed-forward chunking: the feed-forward layer runs in a loop instead of running a single feed-forward with a huge batch size. reduce decode_chunk_size: the VAE decodes frames in chunks instead of decoding them all together. Setting decode_chunk_size=1 decodes one frame at a time and uses the least amount of memory (we recommend adjusting this value based on your GPU memory) but the video might have some flickering. Copied - pipe.enable_model_cpu_offload() +- frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0] ++ pipe.enable_model_cpu_offload() ++ pipe.unet.enable_forward_chunking() ++ frames = pipe(image, decode_chunk_size=2, generator=generator, num_frames=25).frames[0] Using all these tricks together should lower the memory requirement to less than 8GB VRAM. Micro-conditioning Stable Video Diffusion also accepts micro-conditioning, in addition to the conditioning image, which allows more control over the generated video: fps: the frames per second of the generated video. motion_bucket_id: the motion bucket id to use for the generated video. This can be used to control the motion of the generated video.
Increasing the motion bucket id increases the motion of the generated video. noise_aug_strength: the amount of noise added to the conditioning image. The higher the values the less the video resembles the conditioning image. Increasing this value also increases the motion of the generated video. For example, to generate a video with more motion, use the motion_bucket_id and noise_aug_strength micro-conditioning parameters: Copied import torch + +from diffusers import StableVideoDiffusionPipeline +from diffusers.utils import load_image, export_to_video + +pipe = StableVideoDiffusionPipeline.from_pretrained( + "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16" +) +pipe.enable_model_cpu_offload() + +# Load the conditioning image +image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/svd/rocket.png") +image = image.resize((1024, 576)) + +generator = torch.manual_seed(42) +frames = pipe(image, decode_chunk_size=8, generator=generator, motion_bucket_id=180, noise_aug_strength=0.1).frames[0] +export_to_video(frames, "generated.mp4", fps=7) diff --git a/scrapped_outputs/e75ee991d01923af39b9d349121fa31c.txt b/scrapped_outputs/e75ee991d01923af39b9d349121fa31c.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/e75ee991d01923af39b9d349121fa31c.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before we can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used. 
The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for ANE devices, which is available in modern iPhones, iPads and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. It can be faster to run your model on CPU + GPU using original attention than ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework. packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an optional output path, and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4. 
If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. 
For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/e7a551efdf4a1ff935ebc309d674fcc7.txt b/scrapped_outputs/e7a551efdf4a1ff935ebc309d674fcc7.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/e7a551efdf4a1ff935ebc309d674fcc7.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. 
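To see this nondeterminism concretely before looking at the CPU and GPU cases below, you can call the same unseeded pipeline twice and compare the summary values. This is just a sketch that reuses the ddim pipeline and np import from the snippet above. Copied
# two unseeded runs of the same pipeline produce different tensors
image_a = ddim(num_inference_steps=2, output_type="np").images
image_b = ddim(num_inference_steps=2, output_type="np").images

# almost certainly prints two different values, because each run draws fresh noise
print(np.abs(image_a).sum(), np.abs(image_b).sum())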
CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. 
In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/e7a61348504e056d70bf216265d24c52.txt b/scrapped_outputs/e7a61348504e056d70bf216265d24c52.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e7af79c20b3d253cd514356cbe21c6d0.txt b/scrapped_outputs/e7af79c20b3d253cd514356cbe21c6d0.txt new file mode 100644 index 0000000000000000000000000000000000000000..e2e3f78fe8241413a85b55d180c1b4b614352447 --- /dev/null +++ b/scrapped_outputs/e7af79c20b3d253cd514356cbe21c6d0.txt @@ -0,0 +1,123 @@ +Unconditional Image-Generation + +In this section, we explain how one can train an unconditional image generation diffusion +model. “Unconditional” because the model is not conditioned on any context to generate an image - once trained the model will simply generate images that resemble its training data +distribution. 
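To make the goal concrete before diving into training, here is a hedged sampling sketch using the example flowers checkpoint linked further below (anton-l/ddpm-ema-flowers-64). No prompt or other conditioning is involved; the model simply draws an image from its learned distribution. Copied
import torch
from diffusers import DDPMPipeline

# load a DDPM checkpoint produced by this training script and sample unconditionally
pipeline = DDPMPipeline.from_pretrained("anton-l/ddpm-ema-flowers-64")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

image = pipeline(num_inference_steps=50).images[0]
image.save("flower_sample.png")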
+ +Installing the dependencies + +Before running the scripts, make sure to install the library’s training dependencies: + + + Copied +pip install diffusers[training] accelerate datasets +And initialize an 🤗Accelerate environment with: + + + Copied +accelerate config + +Unconditional Flowers + +The command to train a DDPM UNet model on the Oxford Flowers dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/flowers-102-categories" \ + --resolution=64 \ + --output_dir="ddpm-ema-flowers-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub +An example trained model: https://huggingface.co/anton-l/ddpm-ema-flowers-64 +A full training run takes 2 hours on 4xV100 GPUs. + + +Unconditional Pokemon + +The command to train a DDPM UNet model on the Pokemon dataset: + + + Copied +accelerate launch train_unconditional.py \ + --dataset_name="huggan/pokemon" \ + --resolution=64 \ + --output_dir="ddpm-ema-pokemon-64" \ + --train_batch_size=16 \ + --num_epochs=100 \ + --gradient_accumulation_steps=1 \ + --learning_rate=1e-4 \ + --lr_warmup_steps=500 \ + --mixed_precision=no \ + --push_to_hub +An example trained model: https://huggingface.co/anton-l/ddpm-ema-pokemon-64 +A full training run takes 2 hours on 4xV100 GPUs. + + +Using your own data + +To use your own dataset, there are 2 ways: +you can either provide your own folder as --train_data_dir +or you can upload your dataset to the hub (possibly as a private repo, if you prefer so), and simply pass the --dataset_name argument. +Note: If you want to create your own training dataset please have a look at this document. +Below, we explain both in more detail. + +Provide the dataset as a folder + +If you provide your own folders with images, the script expects the following directory structure: + + + Copied +data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png +In other words, the script will take care of gathering all images inside the folder. You can then run the script like this: + + + Copied +accelerate launch train_unconditional.py \ + --train_data_dir \ + +Internally, the script will use the ImageFolder feature which will automatically turn the folders into 🤗 Dataset objects. + +Upload your data to the hub, as a (possibly private) repo + +It’s very easy (and convenient) to upload your image dataset to the hub using the ImageFolder feature available in 🤗 Datasets. Simply do the following: + + + Copied +from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) +ImageFolder will create an image column containing the PIL-encoded images. +Next, push it to the hub! 
+ + + Copied +# assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) +and that’s it! You can now train your model by simply setting the --dataset_name argument to the name of your dataset on the hub. +More on this can also be found in this blog post. diff --git a/scrapped_outputs/e7b976e1afb5fdbfe5d72a461eaaf05f.txt b/scrapped_outputs/e7b976e1afb5fdbfe5d72a461eaaf05f.txt new file mode 100644 index 0000000000000000000000000000000000000000..8423dbc4c086a93fc684851efbfbaf2fbcda62c5 --- /dev/null +++ b/scrapped_outputs/e7b976e1afb5fdbfe5d72a461eaaf05f.txt @@ -0,0 +1,127 @@ +Super-resolution The Stable Diffusion upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION. It is used to enhance the resolution of input images by a factor of 4. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionUpscalePipeline class diffusers.StableDiffusionUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel low_res_scheduler: DDPMScheduler scheduler: KarrasDiffusionSchedulers safety_checker: Optional = None feature_extractor: Optional = None watermarker: Optional = None max_noise_level: int = 350 ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low resolution conditioning image. It must be an instance of +DDPMScheduler. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided image super-resolution using Stable Diffusion 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 noise_level: int = 20 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. 
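For the upscaler in particular, the enable/disable pair above is toggled on the pipeline instance. A small illustrative sketch, reusing the pipeline and low_res_img objects from the upscaling example earlier in this section (not an additional required step): Copied
# trade a little speed for lower peak memory during upscaling
pipeline.enable_attention_slicing()
upscaled_image = pipeline(prompt="a white cat", image=low_res_img).images[0]

# restore single-step attention afterwards
pipeline.disable_attention_slicing()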
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e7bf436912591fdbf0cc3886c1728651.txt b/scrapped_outputs/e7bf436912591fdbf0cc3886c1728651.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e801c34f580a8be34a614dab6f4cecf9.txt b/scrapped_outputs/e801c34f580a8be34a614dab6f4cecf9.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6426f311f1dcd39145426a6e04473141bc5c4c0 --- /dev/null +++ b/scrapped_outputs/e801c34f580a8be34a614dab6f4cecf9.txt @@ -0,0 +1,157 @@ +Stable diffusion 2 + +Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of Stable Diffusion 1. +The project to train Stable Diffusion 2 was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. +The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. +For more details about how Stable Diffusion 2 works and how it differs from Stable Diffusion 1, please refer to the official launch announcement post. + +Tips + + +Available checkpoints: + +Note that the architecture is more or less identical to Stable Diffusion 1 so please refer to this page for API documentation. +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline +Super-Resolution (x4 resolution resolution): stable-diffusion-x4-upscaler StableDiffusionUpscalePipeline +Depth-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-depth with StableDiffusionDepth2ImagePipeline +We recommend using the DPMSolverMultistepScheduler as it’s currently the fastest scheduler there is. 
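The scheduler recommendation above is a one-line swap on any of these checkpoints; a minimal sketch is shown here, and the same pattern appears inside the full examples that follow. Copied
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-base")
# replace the default scheduler with the recommended DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)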
+ +Text-to-Image + +Text-to-Image (512x512 resolution): stabilityai/stable-diffusion-2-base with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image.save("astronaut.png") +Text-to-Image (768x768 resolution): stabilityai/stable-diffusion-2 with StableDiffusionPipeline + + + Copied +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0] +image.save("astronaut.png") + +Image Inpainting + +Image Inpainting (512x512 resolution): stabilityai/stable-diffusion-2-inpainting with StableDiffusionInpaintPipeline + + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] + +image.save("yellow_cat.png") + +Super-Resolution + +Image Upscaling (x4 resolution resolution): stable-diffusion-x4-upscaler with StableDiffusionUpscalePipeline + + + Copied +import requests +from PIL import Image +from io import BytesIO +from diffusers import StableDiffusionUpscalePipeline +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +response = requests.get(url) +low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +upscaled_image.save("upsampled_cat.png") + +Depth-to-Image + +Depth-Guided Text-to-Image: stabilityai/stable-diffusion-2-depth 
StableDiffusionDepth2ImagePipeline + + + Copied +import torch +import requests +from PIL import Image + +from diffusers import StableDiffusionDepth2ImgPipeline + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = Image.open(requests.get(url, stream=True).raw) +prompt = "two tigers" +n_prompt = "bad, deformed, ugly, bad anatomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] + +How to load and use different schedulers. + +The Stable Diffusion pipeline uses the DDIMScheduler by default, but diffusers provides many other schedulers that can be used with it, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler") +>>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=euler_scheduler) diff --git a/scrapped_outputs/e8205158a020345f38e63e94cd4e268f.txt b/scrapped_outputs/e8205158a020345f38e63e94cd4e268f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e052d55cf32f3cae512726b0ae2689a14cfb5d64 --- /dev/null +++ b/scrapped_outputs/e8205158a020345f38e63e94cd4e268f.txt @@ -0,0 +1,378 @@ +unCLIP + + +Overview + +Hierarchical Text-Conditional Image Generation with CLIP Latents by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen +The abstract of the paper is the following: +Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. +The unCLIP model in diffusers comes from kakaobrain’s karlo and the original codebase can be found here.
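A minimal text-to-image sketch with the UnCLIPPipeline documented below. The checkpoint id kakaobrain/karlo-v1-alpha is an assumption on my part (it is the Karlo release mentioned above) rather than something quoted from this page. Copied
import torch
from diffusers import UnCLIPPipeline

# assumed checkpoint id for kakaobrain's Karlo release of unCLIP
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a high-resolution photograph of a red panda eating bamboo"
image = pipe(prompt).images[0]
image.save("unclip_sample.png")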
+ +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_unclip.py +Text-to-Image Generation +- +pipeline_unclip_image_variation.py +Image-Guided Image Generation +- + +UnCLIPPipeline + + +class diffusers.UnCLIPPipeline + +< +source +> +( +prior: PriorTransformer +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +super_res_first: UNet2DModel +super_res_last: UNet2DModel +prior_scheduler: UnCLIPScheduler +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. + + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process. Just a modified DDPMScheduler. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. + + + +Pipeline for text-to-image generation using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +prior_num_inference_steps: int = 25 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prior_latents: typing.Optional[torch.FloatTensor] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +text_model_output: typing.Union[transformers.models.clip.modeling_clip.CLIPTextModelOutput, typing.Tuple, NoneType] = None +text_attention_mask: typing.Optional[torch.Tensor] = None +prior_guidance_scale: float = 4.0 +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. This can only be left undefined if +text_model_output and text_attention_mask is passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. 
+ + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. + + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text outputs +can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can the be left to None. + + +text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +class diffusers.UnCLIPImageVariationPipeline + +< +source +> +( +decoder: UNet2DConditionModel +text_encoder: CLIPTextModelWithProjection +tokenizer: CLIPTokenizer +text_proj: UnCLIPTextProjModel +feature_extractor: CLIPFeatureExtractor +image_encoder: CLIPVisionModelWithProjection +super_res_first: UNet2DModel +super_res_last: UNet2DModel +decoder_scheduler: UnCLIPScheduler +super_res_scheduler: UnCLIPScheduler + +) + + +Parameters + +text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
+ + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. unCLIP Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. + + +decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. + + +super_res_first (UNet2DModel) — +Super resolution unet. Used in all but the last step of the super resolution diffusion process. + + +super_res_last (UNet2DModel) — +Super resolution unet. Used in the last step of the super resolution diffusion process. + + +decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process. Just a modified DDPMScheduler. + + +super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process. Just a modified DDPMScheduler. + + + +Pipeline to generate variations from an input image using unCLIP +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor, NoneType] = None +num_images_per_prompt: int = 1 +decoder_num_inference_steps: int = 25 +super_res_num_inference_steps: int = 7 +generator: typing.Optional[torch._C.Generator] = None +decoder_latents: typing.Optional[torch.FloatTensor] = None +super_res_latents: typing.Optional[torch.FloatTensor] = None +image_embeddings: typing.Optional[torch.Tensor] = None +decoder_guidance_scale: float = 8.0 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPFeatureExtractor. Can be left to None only when image_embeddings are passed. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. + + +super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. + + +decoder_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can the be left to None. + + +output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/e843cf65954258f298a6d99e82da21c2.txt b/scrapped_outputs/e843cf65954258f298a6d99e82da21c2.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/e843cf65954258f298a6d99e82da21c2.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. 
Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. 
You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... ) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 
💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn(
+... (batch_size, unet.config.in_channels, height // 8, width // 8),
+... generator=generator,
+... device=torch_device,
+... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm
+
+>>> scheduler.set_timesteps(num_inference_steps)
+
+>>> for t in tqdm(scheduler.timesteps):
+... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
+... latent_model_input = torch.cat([latents] * 2)
+
+... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)
+
+... # predict the noise residual
+... with torch.no_grad():
+... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
+
+... # perform guidance
+... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
+... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+... # compute the previous noisy sample x_t -> x_t-1
+... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae
+latents = 1 / 0.18215 * latents
+with torch.no_grad():
+ image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze()
+>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy()
+>>> image = Image.fromarray(image)
+>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait to see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately.
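To make that recipe easy to reuse, here is a minimal sketch that wraps the generic denoising loop into a small helper; the denoise name is a hypothetical placeholder, and the model and scheduler arguments stand in for whichever components you load, following the same pattern as the DDPM example above: Copied
import torch

def denoise(model, scheduler, sample, num_inference_steps=50):
    # Set the scheduler's timesteps, then iterate over them, alternating between
    # the model's noise prediction and the scheduler's step() update.
    scheduler.set_timesteps(num_inference_steps)
    for t in scheduler.timesteps:
        with torch.no_grad():
            noise_pred = model(sample, t).sample
        sample = scheduler.step(noise_pred, t, sample).prev_sample
    return sample
For a conditional model such as the Stable Diffusion UNet, the same loop applies; you only add the encoder_hidden_states argument and the classifier-free guidance step shown earlier.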
diff --git a/scrapped_outputs/e843ec425d9bfc10ee7253bc04cd482f.txt b/scrapped_outputs/e843ec425d9bfc10ee7253bc04cd482f.txt new file mode 100644 index 0000000000000000000000000000000000000000..fa29aaa3795982e1203729759aa3fb501feeb077 --- /dev/null +++ b/scrapped_outputs/e843ec425d9bfc10ee7253bc04cd482f.txt @@ -0,0 +1,19 @@ +Habana Gaudi 🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum. Follow the installation guide to install the SynapseAI and Gaudi drivers, and then install Optimum Habana: Copied python -m pip install --upgrade-strategy eager optimum[habana] To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: GaudiStableDiffusionPipeline, a pipeline for text-to-image generation. GaudiDDIMScheduler, a Gaudi-optimized scheduler. When you initialize the pipeline, you have to specify use_habana=True to deploy it on HPUs and to get the fastest possible generation, you should enable HPU graphs with use_hpu_graphs=True. Finally, specify a GaudiConfig which can be downloaded from the Habana organization on the Hub. Copied from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion-2", +) Now you can call the pipeline to generate images by batches from one or several prompts: Copied outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) For more information, check out 🤗 Optimum Habana’s documentation and the example provided in the official GitHub repository. Benchmark We benchmarked Habana’s first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion and Habana/stable-diffusion-2 Gaudi configurations (mixed precision bf16/fp32) to demonstrate their performance. For Stable Diffusion v1.5 on 512x512 images: Latency (batch size = 1) Throughput first-generation Gaudi 3.80s 0.308 images/s (batch size = 8) Gaudi2 1.33s 1.081 images/s (batch size = 8) For Stable Diffusion v2.1 on 768x768 images: Latency (batch size = 1) Throughput first-generation Gaudi 10.2s 0.108 images/s (batch size = 4) Gaudi2 3.17s 0.379 images/s (batch size = 8) diff --git a/scrapped_outputs/e84ffb738bee52f93be597b1222dbb35.txt b/scrapped_outputs/e84ffb738bee52f93be597b1222dbb35.txt new file mode 100644 index 0000000000000000000000000000000000000000..619c1344357a2477dbdb089431e14fc3b2eaccb0 --- /dev/null +++ b/scrapped_outputs/e84ffb738bee52f93be597b1222dbb35.txt @@ -0,0 +1,130 @@ +improved pseudo numerical methods for diffusion models (iPNDM) + + +Overview + +Original implementation can be found here. + +IPNDMScheduler + + +class diffusers.IPNDMScheduler + +< +source +> +( +num_train_timesteps: int = 1000 +trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None + +) + + +Parameters + +num_train_timesteps (int) — number of diffusion steps used to train the model. 
+ + + +Improved Pseudo numerical methods for diffusion models (iPNDM) ported from @crowsonkb’s amazing k-diffusion +library +~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ +function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. +SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and +from_pretrained() functions. +For more details, see the original paper: https://arxiv.org/abs/2202.09778 + +scale_model_input + +< +source +> +( +sample: FloatTensor +*args +**kwargs + +) +→ +torch.FloatTensor + +Parameters + +sample (torch.FloatTensor) — input sample + + +Returns + +torch.FloatTensor + + + +scaled input sample + + +Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. + +set_timesteps + +< +source +> +( +num_inference_steps: int +device: typing.Union[str, torch.device] = None + +) + + +Parameters + +num_inference_steps (int) — +the number of diffusion steps used when generating samples with a pre-trained model. + + + +Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. + +step + +< +source +> +( +model_output: FloatTensor +timestep: int +sample: FloatTensor +return_dict: bool = True + +) +→ +~scheduling_utils.SchedulerOutput or tuple + +Parameters + +model_output (torch.FloatTensor) — direct output from learned diffusion model. + + +timestep (int) — current discrete timestep in the diffusion chain. + + +sample (torch.FloatTensor) — +current instance of sample being created by diffusion process. + + +return_dict (bool) — option for returning tuple rather than SchedulerOutput class + + +Returns + +~scheduling_utils.SchedulerOutput or tuple + + + +~scheduling_utils.SchedulerOutput if return_dict is +True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. + + +Step function propagating the sample with the linear multi-step method. This has one forward pass with multiple +times to approximate the solution. diff --git a/scrapped_outputs/e85f8c30abbb6827fc0e7bc70b9b1f70.txt b/scrapped_outputs/e85f8c30abbb6827fc0e7bc70b9b1f70.txt new file mode 100644 index 0000000000000000000000000000000000000000..191eba717cd93724b13a5915ff44bfc9153360dd --- /dev/null +++ b/scrapped_outputs/e85f8c30abbb6827fc0e7bc70b9b1f70.txt @@ -0,0 +1,338 @@ +GLIGEN (Grounded Language-to-Image Generation) The GLIGEN model was created by researchers and engineers from University of Wisconsin-Madison, Columbia University, and Microsoft. The StableDiffusionGLIGENPipeline and StableDiffusionGLIGENTextImagePipeline can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes with StableDiffusionGLIGENPipeline, if input images are given, StableDiffusionGLIGENTextImagePipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it’ll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It’s trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. The abstract from the paper is: Large-scale text-to-image diffusion models have made amazing advances. However, the status quo is to use text input alone, which can impede controllability. 
In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text2img generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN’s zeroshot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality and how to reuse pipeline components efficiently! If you want to use one of the official checkpoints for a task, explore the gligen Hub organizations! StableDiffusionGLIGENPipeline was contributed by Nikhil Gajendrakumar and StableDiffusionGLIGENTextImagePipeline was contributed by Nguyễn Công Tú Anh. StableDiffusionGLIGENPipeline class diffusers.StableDiffusionGLIGENPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). 
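The gligen_boxes argument documented below expects each box as normalized [xmin, ymin, xmax, ymax] values between 0 and 1. If your boxes come from an annotation tool in pixel coordinates, a small hypothetical helper like this sketch can convert them before calling the pipeline: Copied
def normalize_boxes(pixel_boxes, image_width, image_height):
    # Convert [xmin, ymin, xmax, ymax] pixel boxes to the 0-1 range GLIGEN expects
    return [
        [xmin / image_width, ymin / image_height, xmax / image_width, ymax / image_height]
        for xmin, ymin, xmax, ymax in pixel_boxes
    ]

# a 256x256 box in the top-left corner of a 512x512 image becomes [[0.0, 0.0, 0.5, 0.5]]
boxes = normalize_boxes([[0, 0, 256, 256]], image_width=512, image_height=512)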
__call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. 
prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENPipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a birthday cake" +>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]] +>>> phrases = ["a birthday cake"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained( +... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 +... 
) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" +>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] +>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"] + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-1-4-generation-text-box.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionGLIGENTextImagePipeline class diffusers.StableDiffusionGLIGENTextImagePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer processor: CLIPProcessor image_encoder: CLIPVisionModelWithProjection image_project: CLIPImageProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. processor (CLIPProcessor) — +A CLIPProcessor to procces reference image. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder (clip-vit-large-patch14). image_project (CLIPImageProjection) — +A CLIPImageProjection to project image embedding into phrases embedding space. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 gligen_scheduled_sampling_beta: float = 0.3 gligen_phrases: List = None gligen_images: List = None input_phrases_mask: Union = None input_images_mask: Union = None gligen_boxes: List = None gligen_inpaint_image: Optional = None negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None gligen_normalize_constant: float = 28.7 clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. 
If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. gligen_phrases (List[str]) — +The phrases to guide what to include in each of the regions defined by the corresponding +gligen_boxes. There should only be one phrase per bounding box. gligen_images (List[PIL.Image.Image]) — +The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. +There should only be one image per bounding box input_phrases_mask (int or List[int]) — +pre phrases mask input defined by the correspongding input_phrases_mask input_images_mask (int or List[int]) — +pre images mask input defined by the correspongding input_images_mask gligen_boxes (List[List[float]]) — +The bounding boxes that identify rectangular regions of the image that are going to be filled with the +content described by the corresponding gligen_phrases. Each rectangular box is defined as a +List[float] of 4 elements [xmin, ymin, xmax, ymax] where each value is between [0,1]. gligen_inpaint_image (PIL.Image.Image, optional) — +The input image, if provided, is inpainted with objects described by the gligen_boxes and +gligen_phrases. Otherwise, it is treated as a generation task on a blank input image. gligen_scheduled_sampling_beta (float, defaults to 0.3) — +Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image +Generation. Scheduled Sampling factor is only varied for +scheduled sampling during inference for improved quality and controllability. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). 
If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. gligen_normalize_constant (float, optional, defaults to 28.7) — +The normalize value of the image embedding. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionGLIGENTextImagePipeline +>>> from diffusers.utils import load_image + +>>> # Insert objects described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> input_image = load_image( +... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png" +... ) +>>> prompt = "a backpack" +>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]] +>>> phrases = None +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_inpaint_image=input_image, +... gligen_boxes=boxes, +... gligen_images=[gligen_image], +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-inpainting-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # insert objects described by text and image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a flower sitting on the beach" +>>> boxes = [[0.0, 0.09, 0.53, 0.76]] +>>> phrases = ["flower"] +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=phrases, +... gligen_images=[gligen_image], +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... 
num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box.jpg") + +>>> # Generate an image described by the prompt and +>>> # transfer style described by image at the region defined by bounding boxes +>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained( +... "anhnct/Gligen_Text_Image", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a dragon flying on the sky" +>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style + +>>> gligen_image = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> gligen_placeholder = load_image( +... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png" +... ) + +>>> images = pipe( +... prompt=prompt, +... gligen_phrases=[ +... "dragon", +... "placeholder", +... ], # Can use any text instead of `placeholder` token, because we will use mask here +... gligen_images=[ +... gligen_placeholder, +... gligen_image, +... ], # Can use any image in gligen_placeholder, because we will use mask here +... input_phrases_mask=[1, 0], # Set 0 for the placeholder token +... input_images_mask=[0, 1], # Set 0 for the placeholder image +... gligen_boxes=boxes, +... gligen_scheduled_sampling_beta=1, +... output_type="pil", +... num_inference_steps=50, +... ).images + +>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg") enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
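As a quick illustration of the memory helpers above, the sketch below enables CPU offloading and VAE slicing before running inference; the checkpoint id simply mirrors the anhnct/Gligen_Text_Image example earlier on this page, and the actual savings depend on your hardware: Copied
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # move each sub-model to the GPU only while it runs
pipe.enable_vae_slicing()        # decode the batch one image at a time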
prepare_latents < source > ( batch_size num_channels_latents height width dtype device generator latents = None ) enable_fuser < source > ( enabled = True ) complete_mask < source > ( has_mask max_objs device ) Based on the input mask corresponding value 0 or 1 for each phrases and image, mask the features +corresponding to phrases and images. crop < source > ( im new_width new_height ) Crop the input image to the specified dimensions. draw_inpaint_mask_from_boxes < source > ( boxes size ) Create an inpainting mask based on given boxes. This function generates an inpainting mask using the provided +boxes to mark regions that need to be inpainted. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_clip_feature < source > ( input normalize_constant device is_image = False ) Get image and phrases embedding by using CLIP pretrain model. The image embedding is transformed into the +phrases embedding space through a projection. get_cross_attention_kwargs_with_grounded < source > ( hidden_size gligen_phrases gligen_images gligen_boxes input_phrases_mask input_images_mask repeat_batch normalize_constant max_objs device ) Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image +embedding, phrases embedding). get_cross_attention_kwargs_without_grounded < source > ( hidden_size repeat_batch max_objs device ) Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, +phrases embedding) (All are zero tensor). target_size_center_crop < source > ( im new_hw ) Crop and resize the image to the target size while keeping the center. 
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/e866078a27876c833a25f02df5dc44a5.txt b/scrapped_outputs/e866078a27876c833a25f02df5dc44a5.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/e866078a27876c833a25f02df5dc44a5.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! 
Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." 
Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/e8b126e55dbc2bdc6d457687197e2e53.txt b/scrapped_outputs/e8b126e55dbc2bdc6d457687197e2e53.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/e8b126e55dbc2bdc6d457687197e2e53.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. 
Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
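As a reference for the tips above, here is a minimal sketch that combines a descriptive prompt, a negative prompt, a longer audio length, and more inference steps in a single call. The checkpoint name, prompt text, and parameter values are illustrative rather than prescriptive:

import scipy.io.wavfile
import torch
from diffusers import AudioLDMPipeline

# illustrative checkpoint; other AudioLDM checkpoints follow the same API
pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "high quality recording of a water stream in a forest with birds chirping"
negative_prompt = "low quality, distorted, noisy"

audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=20,   # more steps usually give higher quality audio but slower inference
    audio_length_in_s=10.0,   # length of the generated clip in seconds
).audios[0]

# AudioLDM generates 16 kHz waveforms
scipy.io.wavfile.write("forest_stream.wav", rate=16000, data=audio)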
diff --git a/scrapped_outputs/e8b397983fea6447c60eab07bebab95b.txt b/scrapped_outputs/e8b397983fea6447c60eab07bebab95b.txt new file mode 100644 index 0000000000000000000000000000000000000000..c096748fc9379b50eaf61a541e581e9ab2545d55 --- /dev/null +++ b/scrapped_outputs/e8b397983fea6447c60eab07bebab95b.txt @@ -0,0 +1,383 @@ +Text2Video-Zero Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi. Text2Video-Zero enables zero-shot video generation using either: A textual prompt A prompt combined with guidance from poses or edges Video Instruct-Pix2Pix (instruction-guided video editing) Results are temporally consistent and closely follow the guidance and textual prompts. The abstract from the paper is: Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. +Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. +Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. +As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. You can find additional information about Text2Video-Zero on the project page, paper, and original codebase. Usage example Text-To-Video To generate a video from prompt, run the following Python code: Copied import torch +from diffusers import TextToVideoZeroPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A panda is playing guitar on times square" +result = pipe(prompt=prompt).images +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) You can change these parameters in the pipeline call: Motion field strength (see the paper, Sect. 3.3.1):motion_field_strength_x and motion_field_strength_y. Default: motion_field_strength_x=12, motion_field_strength_y=12 T and T' (see the paper, Sect. 3.3.1)t0 and t1 in the range {0, ..., num_inference_steps}. Default: t0=45, t1=48 Video length:video_length, the number of frames video_length to be generated. 
Default: video_length=8 We can also generate longer videos by doing the processing in a chunk-by-chunk manner: Copied import torch +from diffusers import TextToVideoZeroPipeline +import numpy as np + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +seed = 0 +video_length = 24 #24 ÷ 4fps = 6 seconds +chunk_size = 8 +prompt = "A panda is playing guitar on times square" + +# Generate the video chunk-by-chunk +result = [] +chunk_ids = np.arange(0, video_length, chunk_size - 1) +generator = torch.Generator(device="cuda") +for i in range(len(chunk_ids)): + print(f"Processing chunk {i + 1} / {len(chunk_ids)}") + ch_start = chunk_ids[i] + ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1] + # Attach the first frame for Cross Frame Attention + frame_ids = [0] + list(range(ch_start, ch_end)) + # Fix the seed for the temporal consistency + generator.manual_seed(seed) + output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids) + result.append(output.images[1:]) + +# Concatenate chunks and save +result = np.concatenate(result) +result = [(r * 255).astype("uint8") for r in result] +imageio.mimsave("video.mp4", result, fps=4) SDXL SupportIn order to use the SDXL model when generating a video from prompt, use the TextToVideoZeroSDXLPipeline pipeline: Copied import torch +from diffusers import TextToVideoZeroSDXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipe = TextToVideoZeroSDXLPipeline.from_pretrained( + model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") Text-To-Video with Pose Control To generate a video from prompt with additional pose control Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/poses_skeleton_gifs/dance1_corr.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video containing extracted pose images Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] To extract pose from actual video, read ControlNet documentation. 
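If you want to extract poses from your own video instead of using the pre-extracted demo frames, one common option is the OpenPose detector from the controlnet_aux package. The sketch below is only an outline under that assumption; the video path and frame count are illustrative:

# pip install controlnet_aux
import imageio
from PIL import Image
from controlnet_aux import OpenposeDetector

# read raw RGB frames from your own video (path is illustrative)
reader = imageio.get_reader("my_video.mp4", "ffmpeg")
frames = [Image.fromarray(reader.get_data(i)) for i in range(8)]

# run the OpenPose detector on each frame to obtain pose images
open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_images = [open_pose(frame) for frame in frames]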
Run StableDiffusionControlNetPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "runwayml/stable-diffusion-v1-5" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) SDXL Support Since our attention processor also works with SDXL, it can be used to generate a video from a prompt with ControlNet models powered by SDXL: Copied import torch +from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +controlnet_model_id = 'thibaud/controlnet-openpose-sdxl-1.0' +model_id = 'stabilityai/stable-diffusion-xl-base-1.0' + +controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16) +pipe = StableDiffusionXLControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to('cuda') + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1) + +prompt = "Darth Vader dancing in a desert" +result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) Text-To-Video with Edge Control To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, but use a Canny edge ControlNet model.
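Compared to the pose example, only two things need to change: the conditioning frames become Canny edge maps, and the ControlNet checkpoint becomes a Canny one. A minimal sketch of those two changes, assuming frames is a list of PIL frames and using illustrative thresholds:

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel

# turn each RGB frame into a 3-channel Canny edge map (thresholds are illustrative)
def to_canny(frame, low=100, high=200):
    edges = cv2.Canny(np.array(frame), low, high)
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

canny_images = [to_canny(frame) for frame in frames]

# swap the OpenPose ControlNet for the Canny one; the rest of the pipeline setup stays the same
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)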
Video Instruct-Pix2Pix To perform text-guided video editing (with InstructPix2Pix): Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/pix2pix video/camel.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionInstructPix2PixPipeline with our custom attention processor Copied import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +model_id = "timbrooks/instruct-pix2pix" +pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3)) + +prompt = "make it Van Gogh Starry Night style" +result = pipe(prompt=[prompt] * len(video), image=video).images +imageio.mimsave("edited_video.mp4", result, fps=4) DreamBooth specialization Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control +can run with custom DreamBooth models, as shown below for +Canny edge ControlNet model and +Avatar style DreamBooth model: Download a demo video Copied from huggingface_hub import hf_hub_download + +filename = "__assets__/canny_videos_mp4/girl_turning.mp4" +repo_id = "PAIR/Text2Video-Zero" +video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename) Read video from path Copied from PIL import Image +import imageio + +reader = imageio.get_reader(video_path, "ffmpeg") +frame_count = 8 +canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)] Run StableDiffusionControlNetPipeline with custom trained DreamBooth model Copied import torch +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor + +# set model id to custom model +model_id = "PAIR/text2video-zero-controlnet-canny-avatar" +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + model_id, controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +# Set the attention processor +pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) +pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2)) + +# fix latents for all frames +latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1) + +prompt = "oil painting of a beautiful girl avatar style" +result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images +imageio.mimsave("video.mp4", result, fps=4) You can filter out some available DreamBooth-trained models with this link. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
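As a pointer into that guide, swapping schedulers follows the usual diffusers pattern of rebuilding a scheduler from the pipeline's existing configuration. This is only a generic sketch (DPMSolverMultistepScheduler is just one example); whether a particular scheduler improves results for this pipeline is worth verifying empirically:

from diffusers import DPMSolverMultistepScheduler

# pipe is any of the pipelines created in the examples above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)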
TextToVideoZeroPipeline class diffusers.TextToVideoZeroPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet3DConditionModel to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for zero-shot text-to-video generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 t0: int = 44 t1: int = 47 frame_ids: Optional = None ) → TextToVideoPipelineOutput Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in video generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "numpy") — +The output format of the generated video. Choose between "latent" and "numpy". return_dict (bool, optional, defaults to True) — +Whether or not to return a +TextToVideoPipelineOutput instead of +a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. Returns +TextToVideoPipelineOutput + +The output contains a ndarray of the generated video, when output_type != "latent", otherwise a +latent code of generated videos and a list of bools indicating whether the corresponding generated +video contains “not-safe-for-work” (nsfw) content.. + The call function to the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +Latents of backward process output at time timesteps[-1]. + Perform backward process given list of time steps. 
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoZeroSDXLPipeline class diffusers.TextToVideoZeroSDXLPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for zero-shot text-to-video generation using Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union prompt_2: Union = None video_length: Optional = 8 height: Optional = None width: Optional = None num_inference_steps: int = 50 denoising_end: Optional = None guidance_scale: float = 7.5 negative_prompt: Union = None negative_prompt_2: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None frame_ids: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None latents: Optional = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: Optional = 'tensor' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None t0: int = 44 t1: int = 47 ) Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders video_length (int, optional, defaults to 8) — +The number of generated video frames. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_videos_per_prompt (int, optional, defaults to 1) — +The number of videos to generate per prompt. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. frame_ids (List[int], optional) — +Indexes of the frames that are being generated. This is used when generating longer videos +chunk-by-chunk. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. motion_field_strength_x (float, optional, defaults to 12) — +Strength of motion in generated video along x-axis. See the paper, +Sect. 3.3.1. motion_field_strength_y (float, optional, defaults to 12) — +Strength of motion in generated video along y-axis. See the paper, +Sect. 3.3.1. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. guidance_rescale (float, optional, defaults to 0.7) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (width, height) if not specified. 
Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (width, height). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. t0 (int, optional, defaults to 44) — +Timestep t0. Should be in the range [0, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. t1 (int, optional, defaults to 47) — +Timestep t0. Should be in the range [t0 + 1, num_inference_steps - 1]. See the +paper, Sect. 3.3.1. Function invoked when calling the pipeline for generation. backward_loop < source > ( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents Parameters callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. +extra_step_kwargs — +Extra_step_kwargs. +cross_attention_kwargs — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. +num_warmup_steps — +number of warmup steps. Returns +latents + +latents of backward process output at time timesteps[-1] + Perform backward process given list of time steps disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. 
If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. forward_loop < source > ( x_t0 t0 t1 generator ) → x_t1 Parameters generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. Returns +x_t1 + +Forward process applied to x_t0 from time t0 to t1. + Perform DDPM forward process from time t0 to t1. This is the same as adding noise with corresponding variance. TextToVideoPipelineOutput class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images ([List[PIL.Image.Image], np.ndarray]) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected ([List[bool]]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for zero-shot text-to-video pipeline. 
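For reference, turning the images field of this output into a video file follows the same pattern as the usage examples above. The sketch below simply restates that save step with the imageio import spelled out, assuming result holds the frames returned by the pipeline (a NumPy array with values in [0, 1]):

import imageio

# result is the value of pipe(...).images from the examples above
frames = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("video.mp4", frames, fps=4)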
diff --git a/scrapped_outputs/e926d70a6fce490b2a998a91ec2839f9.txt b/scrapped_outputs/e926d70a6fce490b2a998a91ec2839f9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e9302ec1d3c2b24a8340d5d08cb35c79.txt b/scrapped_outputs/e9302ec1d3c2b24a8340d5d08cb35c79.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f6f4515145581efe8db27c822c4dac240053ef7 --- /dev/null +++ b/scrapped_outputs/e9302ec1d3c2b24a8340d5d08cb35c79.txt @@ -0,0 +1,68 @@ +Consistency Models Consistency Models were proposed in Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models, and additional checkpoints are available at openai. The pipeline was contributed by dg845 and ayushtues. ❤️ Tips For an additional speed-up, use torch.compile to generate multiple images in <1 second: Copied import torch + from diffusers import ConsistencyModelPipeline + + device = "cuda" + # Load the cd_bedroom256_lpips checkpoint. + model_id_or_path = "openai/diffusers-cd_bedroom256_lpips" + pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) + pipe.to(device) + ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + + # Multistep sampling + # Timesteps can be explicitly specified; the particular timesteps below are from the original GitHub repo: + # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L83 + for _ in range(10): + image = pipe(timesteps=[17, 0]).images[0] + image.show() ConsistencyModelPipeline class diffusers.ConsistencyModelPipeline < source > ( unet: UNet2DModel scheduler: CMStochasticIterativeScheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +compatible with CMStochasticIterativeScheduler. Pipeline for unconditional or class-conditional image generation. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 class_labels: Union = None num_inference_steps: int = 1 timesteps: List = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. class_labels (torch.Tensor or List[int] or int, optional) — +Optional class labels for conditioning class-conditional consistency models. Not used if the model is +not class-conditional. num_inference_steps (int, optional, defaults to 1) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + Examples: Copied >>> import torch + +>>> from diffusers import ConsistencyModelPipeline + +>>> device = "cuda" +>>> # Load the cd_imagenet64_l2 checkpoint. 
+>>> model_id_or_path = "openai/diffusers-cd_imagenet64_l2" +>>> pipe = ConsistencyModelPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe.to(device) + +>>> # Onestep Sampling +>>> image = pipe(num_inference_steps=1).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample.png") + +>>> # Onestep sampling, class-conditional image generation +>>> # ImageNet-64 class label 145 corresponds to king penguins +>>> image = pipe(num_inference_steps=1, class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_onestep_sample_penguin.png") + +>>> # Multistep sampling, class-conditional image generation +>>> # Timesteps can be explicitly specified; the particular timesteps below are from the original Github repo: +>>> # https://github.com/openai/consistency_models/blob/main/scripts/launch.sh#L77 +>>> image = pipe(num_inference_steps=None, timesteps=[22, 0], class_labels=145).images[0] +>>> image.save("cd_imagenet64_l2_multistep_sample_penguin.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/e934b6bab307c18c5f82b6b058d5da51.txt b/scrapped_outputs/e934b6bab307c18c5f82b6b058d5da51.txt new file mode 100644 index 0000000000000000000000000000000000000000..aea51731081493d7722569e65ed73dc07e09b92f --- /dev/null +++ b/scrapped_outputs/e934b6bab307c18c5f82b6b058d5da51.txt @@ -0,0 +1,271 @@ +Image Variation + + +StableDiffusionImageVariationPipeline + +StableDiffusionImageVariationPipeline lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of Stable Diffusion model, trained by Justin Pinkney (@Buntworthy) at Lambda +The original codebase can be found here: +Stable Diffusion Image Variations +Available Checkpoints are: +sd-image-variations-diffusers: lambdalabs/sd-image-variations-diffusers + +class diffusers.StableDiffusionImageVariationPipeline + +< +source +> +( +vae: AutoencoderKL +image_encoder: CLIPVisionModelWithProjection +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder. Stable Diffusion Image Variation uses the vision portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline to generate variations from an input image using Stable Diffusion. +This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, typing.List[PIL.Image.Image], torch.FloatTensor] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +The image or images to guide the image generation. If you provide a tensor, it needs to comply with the +configuration of +this +CLIPFeatureExtractor + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. 
+ + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker. + + +Function invoked when calling the pipeline for generation. + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When memory efficient attention and sliced attention are both enabled, memory efficient attention +takes precedence. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for xFormers not accepting the attention shape used by the VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to +torch.device('meta'), and loaded to GPU only when their specific submodule has its forward method called.
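For reference, here is a minimal usage sketch of the pipeline described above. It is not part of the original reference; the checkpoint name comes from the checkpoint list earlier in this section, the input image URL is just an example you would replace with your own image, and a CUDA device is assumed: Copied
import torch
from diffusers import StableDiffusionImageVariationPipeline
from diffusers.utils import load_image

# load the image-variation checkpoint listed above
pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# any RGB image works as the conditioning input; this URL is only an example
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png")

# generate variations of the input image (no text prompt is needed)
images = pipe(init_image, guidance_scale=7.5, num_inference_steps=50).images
images[0].save("image_variation.png")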
diff --git a/scrapped_outputs/e9458bd3816e6055d6ef53da8acd8003.txt b/scrapped_outputs/e9458bd3816e6055d6ef53da8acd8003.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/e94e6ff3ea42b20777fd601aff9513d9.txt b/scrapped_outputs/e94e6ff3ea42b20777fd601aff9513d9.txt new file mode 100644 index 0000000000000000000000000000000000000000..f695b722000cb30de90398c0e34dfcc9554715bb --- /dev/null +++ b/scrapped_outputs/e94e6ff3ea42b20777fd601aff9513d9.txt @@ -0,0 +1,315 @@ +Inpainting Inpainting replaces or edits specific areas of an image. This makes it a useful tool for image restoration like removing defects and artifacts, or even replacing an image area with something entirely new. Inpainting relies on a mask to determine which regions of an image to fill in; the area to inpaint is represented by white pixels and the area to keep is represented by black pixels. The white pixels are filled in by the prompt. With 🤗 Diffusers, here is how you can do inpainting: Load an inpainting checkpoint with the AutoPipelineForInpainting class. This’ll automatically detect the appropriate pipeline class to load based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, it’s not necessary to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load the base and mask images: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") Create a prompt to inpaint the image with and pass it to the pipeline with the base and mask images: Copied prompt = "a black cat with glowing eyes, cute, adorable, disney, pixar, highly detailed, 8k" +negative_prompt = "bad anatomy, deformed, ugly, disfigured" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image mask image generated image Create a mask image Throughout this guide, the mask image is provided in all of the code examples for convenience. You can inpaint on your own images, but you’ll need to create a mask image for it. Use the Space below to easily create a mask image. Upload a base image to inpaint on and use the sketch tool to draw a mask. Once you’re done, click Run to generate and download the mask image. Mask blur The ~VaeImageProcessor.blur method provides an option for how to blend the original image and inpaint area. The amount of blur is determined by the blur_factor parameter. Increasing the blur_factor increases the amount of blur applied to the mask edges, softening the transition between the original image and inpaint area. 
A low or zero blur_factor preserves the sharper edges of the mask. To use this, create a blurred mask with the image processor. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") +blurred_mask = pipeline.mask_processor.blur(mask, blur_factor=33) +blurred_mask mask with no blur mask with blur applied Popular models Stable Diffusion Inpainting, Stable Diffusion XL (SDXL) Inpainting, and Kandinsky 2.2 Inpainting are among the most popular models for inpainting. SDXL typically produces higher resolution images than Stable Diffusion v1.5, and Kandinsky 2.2 is also capable of generating high-quality images. Stable Diffusion Inpainting Stable Diffusion Inpainting is a latent diffusion model finetuned on 512x512 images on inpainting. It is a good starting point because it is relatively fast and generates good quality images. To use this model for inpainting, you’ll need to pass a prompt, base and mask image to the pipeline: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Stable Diffusion XL (SDXL) Inpainting SDXL is a larger and more powerful version of Stable Diffusion v1.5. This model can follow a two-stage model process (though each model can also be used alone); the base model generates an image, and a refiner model takes that image and further enhances its details and quality. Take a look at the SDXL guide for a more comprehensive guide on how to use SDXL and configure it’s parameters. 
Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Kandinsky 2.2 Inpainting The Kandinsky model family is similar to SDXL because it uses two models as well; the image prior model creates image embeddings, and the diffusion model generates images from them. You can load the image prior and diffusion model separately, but the easiest way to use Kandinsky 2.2 is to load it into the AutoPipelineForInpainting class which uses the KandinskyV22InpaintCombinedPipeline under the hood. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) base image Stable Diffusion Inpainting Stable Diffusion XL Inpainting Kandinsky 2.2 Inpainting Non-inpaint specific checkpoints So far, this guide has used inpaint specific checkpoints such as runwayml/stable-diffusion-inpainting. But you can also use regular checkpoints like runwayml/stable-diffusion-v1-5. Let’s compare the results of the two checkpoints. The image on the left is generated from a regular checkpoint, and the image on the right is from an inpaint checkpoint. You’ll immediately notice the image on the left is not as clean, and you can still see the outline of the area the model is supposed to inpaint. The image on the right is much cleaner and the inpainted area appears more natural. 
runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, generator=generator).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting However, for more basic tasks like erasing an object from an image (like the rocks in the road for example), a regular checkpoint yields pretty good results. There isn’t as noticeable of difference between the regular and inpaint checkpoint. runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpaint Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/road-mask.png") + +image = pipeline(prompt="road", image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) runwayml/stable-diffusion-v1-5 runwayml/stable-diffusion-inpainting The trade-off of using a non-inpaint specific checkpoint is the overall image quality may be lower, but it generally tends to preserve the mask area (that is why you can see the mask outline). The inpaint specific checkpoints are intentionally trained to generate higher quality inpainted images, and that includes creating a more natural transition between the masked and unmasked areas. As a result, these checkpoints are more likely to change your unmasked area. If preserving the unmasked area is important for your task, you can use the VaeImageProcessor.apply_overlay method to force the unmasked area of an image to remain the same at the expense of some more unnatural transitions between the masked and unmasked areas. 
Copied import PIL +import numpy as np +import torch + +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +device = "cuda" +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipeline = pipeline.to(device) + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +repainted_image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +repainted_image.save("repainted_image.png") + +unmasked_unchanged_image = pipeline.image_processor.apply_overlay(mask_image, init_image, repainted_image) +unmasked_unchanged_image.save("force_unmasked_unchanged.png") +make_image_grid([init_image, mask_image, repainted_image, unmasked_unchanged_image], rows=2, cols=2) Configure pipeline parameters Image features - like quality and “creativity” - are dependent on pipeline parameters. Knowing what these parameters do is important for getting the results you want. Let’s take a look at the most important parameters and see how changing them affects the output. Strength strength is a measure of how much noise is added to the base image, which influences how similar the output is to the base image. 📈 a high strength value means more noise is added to an image and the denoising process takes longer, but you’ll get higher quality images that are more different from the base image 📉 a low strength value means less noise is added to an image and the denoising process is faster, but the image quality may not be as great and the generated image resembles the base image more Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.6).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) strength = 0.6 strength = 0.8 strength = 1.0 Guidance scale guidance_scale affects how aligned the text prompt and generated image are. 
📈 a high guidance_scale value means the prompt and generated image are closely aligned, so the output is a stricter interpretation of the prompt 📉 a low guidance_scale value means the prompt and generated image are more loosely aligned, so the output may be more varied from the prompt You can use strength and guidance_scale together for more control over how expressive the model is. For example, a combination high strength and guidance_scale values gives the model the most creative freedom. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, guidance_scale=2.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 12.5 Negative prompt A negative prompt assumes the opposite role of a prompt; it guides the model away from generating certain things in an image. This is useful for quickly improving image quality and preventing the model from generating things you don’t want. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +negative_prompt = "bad architecture, unstable, poor details, blurry" +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=init_image, mask_image=mask_image).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) negative_prompt = "bad architecture, unstable, poor details, blurry" Padding mask crop A method for increasing the inpainting image quality is to use the padding_mask_crop parameter. When enabled, this option crops the masked area with some user-specified padding and it’ll also crop the same area from the original image. Both the image and mask are upscaled to a higher resolution for inpainting, and then overlaid on the original image. This is a quick and easy way to improve image quality without using a separate pipeline like StableDiffusionUpscalePipeline. 
Add the padding_mask_crop parameter to the pipeline call and set it to the desired padding value. Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +from PIL import Image + +generator = torch.Generator(device='cuda').manual_seed(0) +pipeline = AutoPipelineForInpainting.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to('cuda') + +base = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png") +mask = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore_mask.png") + +image = pipeline("boat", image=base, mask_image=mask, strength=0.75, generator=generator, padding_mask_crop=32).images[0] +image default inpaint image inpaint image with `padding_mask_crop` enabled Chained inpainting pipelines AutoPipelineForInpainting can be chained with other 🤗 Diffusers pipelines to edit their outputs. This is often useful for improving the output quality from your other diffusion pipelines, and if you’re using multiple pipelines, it can be more memory-efficient to chain them together to keep the outputs in latent space and reuse the same pipeline components. Text-to-image-to-inpaint Chaining a text-to-image and inpainting pipeline allows you to inpaint the generated image, and you don’t have to provide a base image to begin with. This makes it convenient to edit your favorite text-to-image outputs without having to generate an entirely new image. Start with the text-to-image pipeline to create a castle: Copied import torch +from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k").images[0] Load the mask image of the output from above: Copied mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_text-chain-mask.png") And let’s inpaint the masked area with a waterfall: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "digital painting of a fantasy waterfall, cloudy" +image = pipeline(prompt=prompt, image=text2image, mask_image=mask_image).images[0] +make_image_grid([text2image, mask_image, image], rows=1, cols=3) text-to-image inpaint Inpaint-to-image-to-image You can also chain an inpainting pipeline before another pipeline like image-to-image or an upscaler to improve the quality. 
Begin by inpainting an image: Copied import torch +from diffusers import AutoPipelineForInpainting, AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image_inpainting = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0] + +# resize image to 1024x1024 for SDXL +image_inpainting = image_inpainting.resize((1024, 1024)) Now let’s pass the image to another inpainting pipeline with SDXL’s refiner model to enhance the image details and quality: Copied pipeline = AutoPipelineForInpainting.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image_inpainting, mask_image=mask_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. For example, in the Text-to-image-to-inpaint section, Kandinsky 2.2 uses a different VAE class than the Stable Diffusion model so it won’t work. But if you use Stable Diffusion v1.5 for both pipelines, then you can keep everything in latent space because they both use AutoencoderKL. Finally, you can pass this image to an image-to-image pipeline to put the finishing touches on it. It is more efficient to use the from_pipe() method to reuse the existing pipeline components, and avoid unnecessarily loading all the pipeline components into memory again. Copied pipeline = AutoPipelineForImage2Image.from_pipe(pipeline) +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt=prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image_inpainting, image], rows=2, cols=2) initial image inpaint image-to-image Image-to-image and inpainting are actually very similar tasks. Image-to-image generates a new image that resembles the existing provided image. Inpainting does the same thing, but it only transforms the image area defined by the mask and the rest of the image is unchanged. You can think of inpainting as a more precise tool for making specific changes and image-to-image has a broader scope for making more sweeping changes. Control image generation Getting an image to look exactly the way you want is challenging because the denoising process is random. 
While you can control certain aspects of generation by configuring parameters like negative_prompt, there are better and more efficient methods for controlling image generation. Prompt weighting Prompt weighting provides a quantifiable way to scale the representation of concepts in a prompt. You can use it to increase or decrease the magnitude of the text embedding vector for each concept in the prompt, which subsequently determines how much of each concept is generated. The Compel library offers an intuitive syntax for scaling the prompt weights and generating the embeddings. Learn how to create the embeddings in the Prompt weighting guide. Once you’ve generated the embeddings, pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the AutoPipelineForInpainting. The embeddings replace the prompt parameter: Copied import torch +from diffusers import AutoPipelineForInpainting +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, + mask_image=mask_image +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) ControlNet ControlNet models are used with other diffusion models like Stable Diffusion, and they provide an even more flexible and accurate way to control how an image is generated. A ControlNet accepts an additional conditioning image input that guides the diffusion model to preserve the features in it. 
For example, let’s condition an image with a ControlNet pretrained on inpaint images: Copied import torch +import PIL +import numpy as np +from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline +from diffusers.utils import load_image, make_image_grid + +# load ControlNet +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16, variant="fp16") + +# pass ControlNet to the pipeline +pipeline = StableDiffusionControlNetInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# prepare control image +def make_inpaint_condition(init_image, mask_image): + init_image = np.array(init_image.convert("RGB")).astype(np.float32) / 255.0 + mask_image = np.array(mask_image.convert("L")).astype(np.float32) / 255.0 + + assert init_image.shape[0:1] == mask_image.shape[0:1], "image and image_mask must have the same image size" + init_image[mask_image > 0.5] = -1.0 # set as masked pixel + init_image = np.expand_dims(init_image, 0).transpose(0, 3, 1, 2) + init_image = torch.from_numpy(init_image) + return init_image + +control_image = make_inpaint_condition(init_image, mask_image) Now generate an image from the base, mask and control images. You’ll notice features of the base image are strongly preserved in the generated image. Copied prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, control_image=control_image).images[0] +make_image_grid([init_image, mask_image, PIL.Image.fromarray(np.uint8(control_image[0][0])).convert('RGB'), image], rows=2, cols=2) You can take this a step further and chain it with an image-to-image pipeline to apply a new style: Copied from diffusers import AutoPipelineForImage2Image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style castle" # include the token "elden ring style" in the prompt +negative_prompt = "bad architecture, deformed, disfigured, poor details" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image).images[0] +make_image_grid([init_image, mask_image, image, image_elden_ring], rows=2, cols=2) initial image ControlNet inpaint image-to-image Optimize It can be difficult and slow to run diffusion models if you’re resource constrained, but it doesn’t have to be with a few optimization tricks. One of the biggest (and easiest) optimizations you can enable is switching to memory-efficient attention. If you’re using PyTorch 2.0, scaled dot-product attention is automatically enabled and you don’t need to do anything else.
For non-PyTorch 2.0 users, you can install and use xFormers’s implementation of memory-efficient attention. Both options reduce memory usage and accelerate inference. You can also offload the model to the CPU to save even more memory: Copied + pipeline.enable_xformers_memory_efficient_attention() ++ pipeline.enable_model_cpu_offload() To speed-up your inference code even more, use torch_compile. You should wrap torch.compile around the most intensive component in the pipeline which is typically the UNet: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) Learn more in the Reduce memory usage and Torch 2.0 guides. diff --git a/scrapped_outputs/e967c78a894c667d5f6f61aa0f5141c7.txt b/scrapped_outputs/e967c78a894c667d5f6f61aa0f5141c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..f0397b5c22fb5325147b95174f9f4e470d2a9999 --- /dev/null +++ b/scrapped_outputs/e967c78a894c667d5f6f61aa0f5141c7.txt @@ -0,0 +1,136 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
The following sections highlight parts of the training scripts that are important for understanding how to modify it, but it doesn’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. + + +The main() function contains the code for preparing the dataset and training the model. One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to setup the optimizer to learn the prior mode’s parameters. 
Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. + + +The main() function contains the code for preparing the dataset and training the model. Unlike the prior model, the decoder initializes a VQModel to decode the latents into images and it uses a UNet2DConditionModel: Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + vae = VQModel.from_pretrained( + args.pretrained_decoder_model_name_or_path, subfolder="movq", torch_dtype=weight_dtype + ).eval() + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() +unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") Next, the script includes several image transforms and a preprocessing function for applying the transforms to the images and returning the pixel values: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + return examples Lastly, the training loop handles converting the images to latents, adding noise, and predicting the noise residual. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Copied model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4] + + + Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. 
You’ll also need to add the --validation_prompts parameter to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. + + + + Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" + + + + Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-decoder-pokemon-model" + + +Once training is finished, you can use your newly trained model for inference! + + + + Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained(output_dir, torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] For the decoder model, you can also perform inference from a saved checkpoint which can be useful for viewing intermediate results. In this case, load the checkpoint into the UNet: Copied from diffusers import AutoPipelineForText2Image, UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("path/to/saved/model" + "/checkpoint-/unet") + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", unet=unet, torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +image = pipeline(prompt="A robot pokemon, 4k photo").images[0] + + + Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined!
diff --git a/scrapped_outputs/e982c76bf88c92ee2f62d105a76acfee.txt b/scrapped_outputs/e982c76bf88c92ee2f62d105a76acfee.txt new file mode 100644 index 0000000000000000000000000000000000000000..218eb87f8f649852b0b2e0b52a2a1d758aa1b603 --- /dev/null +++ b/scrapped_outputs/e982c76bf88c92ee2f62d105a76acfee.txt @@ -0,0 +1 @@ +Using Diffusers with other modalities Diffusers is in the process of expanding to modalities other than images. Example type Colab Pipeline Molecule conformation generation ❌ More coming soon! diff --git a/scrapped_outputs/e98d42ff770679bd545c62976a8731e8.txt b/scrapped_outputs/e98d42ff770679bd545c62976a8731e8.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c2ceeecc625c651b3ef1cc43f5c7fb053d83bae --- /dev/null +++ b/scrapped_outputs/e98d42ff770679bd545c62976a8731e8.txt @@ -0,0 +1,100 @@ +Stochastic Karras VE + + +Overview + +Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine. +The abstract of the paper is the following: +We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55. +This pipeline implements the Stochastic sampling tailored to the Variance-Expanding (VE) models. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_stochastic_karras_ve.py +Unconditional Image Generation +- + +KarrasVePipeline + + +class diffusers.KarrasVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: KarrasVeScheduler + +) + + +Parameters + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (KarrasVeScheduler) — +Scheduler for the diffusion process to be used in combination with unet to denoise the encoded image. + + + +Stochastic sampling from Karras et al. [1] tailored to the Variance-Expanding (VE) models [2]. Use Algorithm 2 and +the VE column of Table 1 from [1] for reference. +[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” +https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic +differential equations.” https://arxiv.org/abs/2011.13456 + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 50 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/e995a68b68524204132fc8764597c440.txt b/scrapped_outputs/e995a68b68524204132fc8764597c440.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac59df5433d23b7c188dd3d53bf865450ff7dab9 --- /dev/null +++ b/scrapped_outputs/e995a68b68524204132fc8764597c440.txt @@ -0,0 +1 @@ +Reinforcement learning training with DDPO You can fine-tune Stable Diffusion on a reward function via reinforcement learning with the 🤗 TRL library and 🤗 Diffusers. This is done with the Denoising Diffusion Policy Optimization (DDPO) algorithm introduced by Black et al. in Training Diffusion Models with Reinforcement Learning, which is implemented in 🤗 TRL with the DDPOTrainer. For more information, check out the DDPOTrainer API reference and the Finetune Stable Diffusion Models with DDPO via TRL blog post. diff --git a/scrapped_outputs/e9c43c54c19fb8a1a34aba47eca8e57c.txt b/scrapped_outputs/e9c43c54c19fb8a1a34aba47eca8e57c.txt new file mode 100644 index 0000000000000000000000000000000000000000..18ff21ef44b1209309d3996bfa0c5efab35a57c1 --- /dev/null +++ b/scrapped_outputs/e9c43c54c19fb8a1a34aba47eca8e57c.txt @@ -0,0 +1,78 @@ +Safe Stable Diffusion Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates inappropriate degeneration from Stable Diffusion models because they’re trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content. The abstract from the paper is: Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. 
Tips Use the safety_concept property of StableDiffusionPipelineSafe to check and edit the current safety concept: Copied >>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty' For each image generation the active concept is also contained in StableDiffusionSafePipelineOutput. There are 4 configurations (SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX) that can be applied: Copied >>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! StableDiffusionPipelineSafe class diffusers.StableDiffusionPipelineSafe < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: SafeStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline based on the StableDiffusionPipeline for text-to-image generation using Safe Latent Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 sld_guidance_scale: Optional = 1000 sld_warmup_steps: Optional = 10 sld_threshold: Optional = 0.01 sld_momentum_scale: Optional = 0.3 sld_mom_beta: Optional = 0.4 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. sld_guidance_scale (float, optional, defaults to 1000) — +If sld_guidance_scale < 1, safety guidance is disabled. sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD is only be applied for diffusion steps greater than +sld_warmup_steps. 
sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0, +momentum is disabled. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum is kept. Momentum is built up during warmup for diffusion steps smaller than +sld_warmup_steps. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied import torch +from diffusers import StableDiffusionPipelineSafe +from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +pipeline = StableDiffusionPipelineSafe.from_pretrained( + "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16 +).to("cuda") +prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +image = pipeline(prompt=prompt, **SafetyConfig.MEDIUM).images[0] StableDiffusionSafePipelineOutput class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput < source > ( images: Union nsfw_content_detected: Optional unsafe_images: Union applied_safety_concept: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker any may contain “not-safe-for-work” +(nsfw) content, or None if no safety check was performed or no images were flagged. applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled Output class for Safe Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. diff --git a/scrapped_outputs/e9ddda7ed42551e3991df4a9e7f5fdb0.txt b/scrapped_outputs/e9ddda7ed42551e3991df4a9e7f5fdb0.txt new file mode 100644 index 0000000000000000000000000000000000000000..576dcc80f8d3648a3bfddba4f5d8e453c126504f --- /dev/null +++ b/scrapped_outputs/e9ddda7ed42551e3991df4a9e7f5fdb0.txt @@ -0,0 +1,58 @@ +Tiny AutoEncoder Tiny AutoEncoder for Stable Diffusion (TAESD) was introduced in madebyollin/taesd by Ollin Boer Bohan. It is a tiny distilled version of Stable Diffusion’s VAE that can quickly decode the latents in a StableDiffusionPipeline or StableDiffusionXLPipeline almost instantly. 
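Because it exposes the same encode/decode interface as the full VAE, the tiny autoencoder can also be exercised on its own. A minimal sketch of an encode/decode round trip with the madebyollin/taesd checkpoint (the random input stands in for a preprocessed image batch and the shapes are only illustrative): Copied
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed image batch
with torch.no_grad():
    latents = vae.encode(image).latents  # AutoencoderTinyOutput.latents, expected shape (1, 4, 64, 64)
    reconstruction = vae.decode(latents).sample  # decoded back to an image-shaped tensor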
To use with Stable Diffusion v-2.1: Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image To use with Stable Diffusion XL 1.0 Copied import torch +from diffusers import DiffusionPipeline, AutoencoderTiny + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16 +) +pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesdxl", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +prompt = "slice of delicious New York-style berry cheesecake" +image = pipe(prompt, num_inference_steps=25).images[0] +image AutoencoderTiny class diffusers.AutoencoderTiny < source > ( in_channels: int = 3 out_channels: int = 3 encoder_block_out_channels: Tuple = (64, 64, 64, 64) decoder_block_out_channels: Tuple = (64, 64, 64, 64) act_fn: str = 'relu' latent_channels: int = 4 upsampling_scaling_factor: int = 2 num_encoder_blocks: Tuple = (1, 3, 3, 3) num_decoder_blocks: Tuple = (3, 3, 3, 1) latent_magnitude: int = 3 latent_shift: float = 0.5 force_upcast: bool = False scaling_factor: float = 1.0 ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. encoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each encoder block. The length of the +tuple should be equal to the number of encoder blocks. decoder_block_out_channels (Tuple[int], optional, defaults to (64, 64, 64, 64)) — +Tuple of integers representing the number of output channels for each decoder block. The length of the +tuple should be equal to the number of decoder blocks. act_fn (str, optional, defaults to "relu") — +Activation function to be used throughout the model. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent representation. The latent space acts as a compressed representation of +the input image. upsampling_scaling_factor (int, optional, defaults to 2) — +Scaling factor for upsampling in the decoder. It determines the size of the output image during the +upsampling process. num_encoder_blocks (Tuple[int], optional, defaults to (1, 3, 3, 3)) — +Tuple of integers representing the number of encoder blocks at each stage of the encoding process. The +length of the tuple should be equal to the number of stages in the encoder. Each stage has a different +number of encoder blocks. num_decoder_blocks (Tuple[int], optional, defaults to (3, 3, 3, 1)) — +Tuple of integers representing the number of decoder blocks at each stage of the decoding process. The +length of the tuple should be equal to the number of stages in the decoder. Each stage has a different +number of decoder blocks. latent_magnitude (float, optional, defaults to 3.0) — +Magnitude of the latent representation. This parameter scales the latent representation values to control +the extent of information preservation. latent_shift (float, optional, defaults to 0.5) — +Shift applied to the latent representation. This parameter controls the center of the latent space. 
scaling_factor (float, optional, defaults to 1.0) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. For this Autoencoder, +however, no such scaling factor was used, hence the value of 1.0 as the default. force_upcast (bool, optional, default to False) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE +can be fine-tuned / trained to a lower range without losing too much precision, in which case +force_upcast can be set to False (see this fp16-friendly +AutoEncoder). A tiny distilled VAE model for encoding images into latents and decoding latent representations into images. AutoencoderTiny is a wrapper around the original implementation of TAESD. This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for +all models (such as downloading or saving). disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor return_dict: bool = True ) Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. scale_latents < source > ( x: FloatTensor ) raw latents -> [0, 1] unscale_latents < source > ( x: FloatTensor ) [0, 1] -> raw latents AutoencoderTinyOutput class diffusers.models.autoencoders.autoencoder_tiny.AutoencoderTinyOutput < source > ( latents: Tensor ) Parameters latents (torch.Tensor) — Encoded outputs of the Encoder. Output of AutoencoderTiny encoding method. diff --git a/scrapped_outputs/ea1fd268c82b6cb05c9de487c8012e02.txt b/scrapped_outputs/ea1fd268c82b6cb05c9de487c8012e02.txt new file mode 100644 index 0000000000000000000000000000000000000000..3ac97628e55336ffb0041210b78e5d43066c4f7c --- /dev/null +++ b/scrapped_outputs/ea1fd268c82b6cb05c9de487c8012e02.txt @@ -0,0 +1,225 @@ +AudioLDM 2 AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. 
Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs. The abstract of the paper is the following: Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called “language of audio” (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate state-of-the-art or competitive performance against previous approaches. Our code, pretrained model, and demo are available at this https URL. This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2. Tips Choosing a checkpoint AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. +See table below for details on the three checkpoints: Checkpoint Task UNet Model Size Total Model Size Training Data / h audioldm2 Text-to-audio 350M 1.1B 1150k audioldm2-large Text-to-audio 750M 1.5B 1150k audioldm2-music Text-to-music 350M 1.1B 665k Constructing a prompt Descriptive prompt inputs work best: use adjectives to describe the sound (e.g. “high quality” or “clear”) and make the prompt context specific (e.g. “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. Using a negative prompt can significantly improve the quality of the generated waveform, by guiding the generation away from terms that correspond to poor quality audio. 
Try using a negative prompt of “Low quality.” Controlling inference The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Evaluating generated waveforms: The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct good music generation using the aforementioned tips: example. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDM2Pipeline class diffusers.AudioLDM2Pipeline < source > ( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: Union tokenizer_2: Union feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model +CLAP, +specifically the laion/clap-htsat-unfused variant. The +text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to +rank generated waveforms against the text prompt by computing similarity scores. text_encoder_2 (T5EncoderModel) — +Second frozen text-encoder. AudioLDM2 uses the encoder of +T5, specifically the +google/flan-t5-large variant. projection_model (AudioLDM2ProjectionModel) — +A trained model used to linearly project the hidden-states from the first and second text encoder models +and insert learned SOS and EOS token embeddings. The projected hidden-states from the two text encoders are +concatenated to give the input to the language model. language_model (GPT2Model) — +An auto-regressive language model used to generate a sequence of hidden-states conditioned on the projected +outputs from the two text encoders. tokenizer (RobertaTokenizer) — +Tokenizer to tokenize text for the first frozen text-encoder. tokenizer_2 (T5Tokenizer) — +Tokenizer to tokenize text for the second frozen text-encoder. feature_extractor (ClapFeatureExtractor) — +Feature extractor to pre-process generated audio waveforms to log-mel spectrograms for automatic scoring. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan to convert the mel-spectrogram latents to the final audio waveform. Pipeline for text-to-audio generation using AudioLDM2. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 3.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, then automatic +scoring is performed between the generated outputs and the text prompt. This scoring ranks the +generated waveforms based on their cosine similarity with the text input in the joint text-audio +embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. 
negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +Number of new tokens to generate with the GPT2 language model. If not provided, number of tokens will +be taken from the config of the model. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # define the prompts +>>> prompt = "The sound of a hammer hitting a wooden surface." +>>> negative_prompt = "Low quality." + +>>> # set the seed for generator +>>> generator = torch.Generator("cuda").manual_seed(0) + +>>> # run the generation +>>> audio = pipe( +... prompt, +... negative_prompt=negative_prompt, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... num_waveforms_per_prompt=3, +... generator=generator, +... ).audios + +>>> # save the best audio sample (index 0) as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. 
Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. encode_prompt < source > ( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None generated_prompt_embeds: Optional = None negative_generated_prompt_embeds: Optional = None attention_mask: Optional = None negative_attention_mask: Optional = None max_new_tokens: Optional = None ) → prompt_embeds (torch.FloatTensor) Parameters prompt (str or List[str], optional) — +prompt to be encoded device (torch.device) — +torch device num_waveforms_per_prompt (int) — +number of waveforms that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the audio generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, text embeddings will be computed from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings from the GPT2 langauge model. Can be used to easily tweak text inputs, +e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input +argument. negative_generated_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text +inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be computed from +negative_prompt input argument. attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the prompt_embeds. If not provided, attention mask will +be computed from prompt input argument. negative_attention_mask (torch.LongTensor, optional) — +Pre-computed attention mask to be applied to the negative_prompt_embeds. If not provided, attention +mask will be computed from negative_prompt input argument. max_new_tokens (int, optional, defaults to None) — +The number of new tokens to generate with the GPT2 language model. Returns +prompt_embeds (torch.FloatTensor) + +Text embeddings from the Flan T5 model. +attention_mask (torch.LongTensor): +Attention mask to be applied to the prompt_embeds. +generated_prompt_embeds (torch.FloatTensor): +Text embeddings generated from the GPT2 langauge model. + Encodes the prompt into text encoder hidden states. 
Example: Copied >>> import scipy +>>> import torch +>>> from diffusers import AudioLDM2Pipeline + +>>> repo_id = "cvssp/audioldm2" +>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # Get text embedding vectors +>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt( +... prompt="Techno music with a strong, upbeat tempo and high melodic riffs", +... device="cuda", +... do_classifier_free_guidance=True, +... ) + +>>> # Pass text embeddings to pipeline for text-conditional audio generation +>>> audio = pipe( +... prompt_embeds=prompt_embeds, +... attention_mask=attention_mask, +... generated_prompt_embeds=generated_prompt_embeds, +... num_inference_steps=200, +... audio_length_in_s=10.0, +... ).audios[0] + +>>> # save generated audio sample +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) generate_language_model < source > ( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) Parameters inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — +The sequence used as a prompt for the generation. max_new_tokens (int) — +Number of new tokens to generate. model_kwargs (Dict[str, Any], optional) — +Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the forward +function of the model. Returns +inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) + +The sequence of generated hidden-states. + Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs. AudioLDM2ProjectionModel class diffusers.AudioLDM2ProjectionModel < source > ( text_encoder_dim text_encoder_1_dim langauge_model_dim ) Parameters text_encoder_dim (int) — +Dimensionality of the text embeddings from the first text encoder (CLAP). text_encoder_1_dim (int) — +Dimensionality of the text embeddings from the second text encoder (T5 or VITS). langauge_model_dim (int) — +Dimensionality of the text embeddings from the language model (GPT2). A simple linear projection model that maps two text embeddings to a shared latent space. It also inserts learned +embedding vectors at the start and end of each text embedding sequence. Each variable suffixed with +_1 corresponds to the second text encoder; otherwise, it comes from the first.
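To make the projection step concrete, the sketch below pushes two dummy text-embedding sequences through the model's forward method (documented next). The dimensionalities (512 for the CLAP branch, 1024 for the Flan-T5 branch, 768 for the GPT2 language model) and the sequence lengths are illustrative assumptions, not the values stored in the pretrained checkpoints: Copied
import torch
from diffusers import AudioLDM2ProjectionModel

# illustrative dimensionalities; the real values come from the pretrained checkpoint config
projection_model = AudioLDM2ProjectionModel(
    text_encoder_dim=512, text_encoder_1_dim=1024, langauge_model_dim=768
)

clap_embeds = torch.randn(1, 1, 512)  # pooled CLAP text embedding, treated as a length-1 sequence
t5_embeds = torch.randn(1, 7, 1024)   # Flan-T5 token embeddings

# both sequences are projected to the language-model width and framed by the learned SOS/EOS embeddings
projected = projection_model(hidden_states=clap_embeds, hidden_states_1=t5_embeds)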
forward < source > ( hidden_states: Optional = None hidden_states_1: Optional = None attention_mask: Optional = None attention_mask_1: Optional = None ) AudioLDM2UNet2DConditionModel class diffusers.AudioLDM2UNet2DConditionModel < source > ( sample_size: Optional = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: Tuple = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: Optional = 'UNetMidBlock2DCrossAttn' up_block_types: Tuple = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: Union = False block_out_channels: Tuple = (320, 640, 1280, 1280) layers_per_block: Union = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: Optional = 32 norm_eps: float = 1e-05 cross_attention_dim: Union = 1280 transformer_layers_per_block: Union = 1 attention_head_dim: Union = 8 num_attention_heads: Union = None use_linear_projection: bool = False class_embed_type: Optional = None num_class_embeds: Optional = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: Optional = None time_embedding_act_fn: Optional = None timestep_post_act: Optional = None time_cond_proj_dim: Optional = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: Optional = None class_embeddings_concat: bool = False ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. in_channels (int, optional, defaults to 4) — Number of channels in the input sample. out_channels (int, optional, defaults to 4) — Number of channels in the output. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip the sin to cos in the time embedding. freq_shift (int, optional, defaults to 0) — The frequency shift to apply to the time embedding. down_block_types (Tuple[str], optional, defaults to ("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")) — +The tuple of downsample blocks to use. mid_block_type (str, optional, defaults to "UNetMidBlock2DCrossAttn") — +Block type for middle of UNet, it can only be UNetMidBlock2DCrossAttn for AudioLDM2. up_block_types (Tuple[str], optional, defaults to ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")) — +The tuple of upsample blocks to use. only_cross_attention (bool or Tuple[bool], optional, default to False) — +Whether to include self-attention in the basic transformer blocks, see +BasicTransformerBlock. block_out_channels (Tuple[int], optional, defaults to (320, 640, 1280, 1280)) — +The tuple of output channels for each block. layers_per_block (int, optional, defaults to 2) — The number of layers per block. downsample_padding (int, optional, defaults to 1) — The padding to use for the downsampling convolution. mid_block_scale_factor (float, optional, defaults to 1.0) — The scale factor to use for the mid block. act_fn (str, optional, defaults to "silu") — The activation function to use. norm_num_groups (int, optional, defaults to 32) — The number of groups to use for the normalization. +If None, normalization and activation layers is skipped in post-processing. norm_eps (float, optional, defaults to 1e-5) — The epsilon to use for the normalization. 
cross_attention_dim (int or Tuple[int], optional, defaults to 1280) — +The dimension of the cross attention features. transformer_layers_per_block (int or Tuple[int], optional, defaults to 1) — +The number of transformer blocks of type BasicTransformerBlock. Only relevant for +~models.unet_2d_blocks.CrossAttnDownBlock2D, ~models.unet_2d_blocks.CrossAttnUpBlock2D, +~models.unet_2d_blocks.UNetMidBlock2DCrossAttn. attention_head_dim (int, optional, defaults to 8) — The dimension of the attention heads. num_attention_heads (int, optional) — +The number of attention heads. If not defined, defaults to attention_head_dim resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", "identity", "projection", or "simple_projection". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim, when performing +class conditioning with class_embed_type equal to None. time_embedding_type (str, optional, defaults to positional) — +The type of position embedding to use for timesteps. Choose from positional or fourier. time_embedding_dim (int, optional, defaults to None) — +An optional override for the dimension of the projected time embedding. time_embedding_act_fn (str, optional, defaults to None) — +Optional activation function to use only once on the time embeddings before they are passed to the rest of +the UNet. Choose from silu, mish, gelu, and swish. timestep_post_act (str, optional, defaults to None) — +The second activation function to use in timestep embedding. Choose from silu, mish and gelu. time_cond_proj_dim (int, optional, defaults to None) — +The dimension of cond_proj layer in the timestep embedding. conv_in_kernel (int, optional, default to 3) — The kernel size of conv_in layer. conv_out_kernel (int, optional, default to 3) — The kernel size of conv_out layer. projection_class_embeddings_input_dim (int, optional) — The dimension of the class_labels input when +class_embed_type="projection". Required when class_embed_type="projection". class_embeddings_concat (bool, optional, defaults to False) — Whether to concatenate the time +embeddings with the class embeddings. A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep and returns a sample +shaped output. Compared to the vanilla UNet2DConditionModel, this variant optionally includes an additional +self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up +to two cross-attention embeddings, encoder_hidden_states and encoder_hidden_states_1. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). 
forward < source > ( sample: FloatTensor timestep: Union encoder_hidden_states: Tensor class_labels: Optional = None timestep_cond: Optional = None attention_mask: Optional = None cross_attention_kwargs: Optional = None encoder_attention_mask: Optional = None return_dict: bool = True encoder_hidden_states_1: Optional = None encoder_attention_mask_1: Optional = None ) → UNet2DConditionOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. encoder_hidden_states (torch.FloatTensor) — +The encoder hidden states with shape (batch, sequence_length, feature_dim). encoder_attention_mask (torch.Tensor) — +A cross-attention mask of shape (batch, sequence_length) is applied to encoder_hidden_states. If +True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor. encoder_hidden_states_1 (torch.FloatTensor, optional) — +A second set of encoder hidden states with shape (batch, sequence_length_2, feature_dim_2). Can be +used to condition the model on a different set of embeddings to encoder_hidden_states. encoder_attention_mask_1 (torch.Tensor, optional) — +A cross-attention mask of shape (batch, sequence_length_2) is applied to encoder_hidden_states_1. +If True the mask is kept, otherwise if False it is discarded. Mask will be converted into a bias, +which adds large negative values to the attention scores corresponding to “discard” tokens. Returns +UNet2DConditionOutput or tuple + +If return_dict is True, an UNet2DConditionOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + The AudioLDM2UNet2DConditionModel forward method. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. diff --git a/scrapped_outputs/ea46f710434e0fba6fde1ee743a6daca.txt b/scrapped_outputs/ea46f710434e0fba6fde1ee743a6daca.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ea5677cd67196afccc5dc6a3488d1cc7.txt b/scrapped_outputs/ea5677cd67196afccc5dc6a3488d1cc7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ea8d2490b85fc5e9a48c4a04b405cbbf.txt b/scrapped_outputs/ea8d2490b85fc5e9a48c4a04b405cbbf.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/ea8d2490b85fc5e9a48c4a04b405cbbf.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. 
The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... 
).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference of the prior pipeline. The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference of the decoder pipeline. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: Copied
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"

image = pipe(prompt=prompt, num_inference_steps=25).images[0]
enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
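Since the reference above only describes enable_sequential_cpu_offload() in prose, here is a minimal, hedged sketch of swapping it in for enable_model_cpu_offload() from the example above when GPU memory is the main constraint. The checkpoint name mirrors that example; the prompt and the 25-step setting are illustrative assumptions.

from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)

# Offload at the submodule level: lowest GPU memory footprint, slowest inference.
pipe.enable_sequential_cpu_offload()

prompt = "A watercolor painting of a lighthouse at dawn"
image = pipe(prompt=prompt, num_inference_steps=25).images[0]
image.save("lighthouse.png")

Prefer enable_model_cpu_offload() when you can afford slightly more memory in exchange for noticeably faster generation.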
KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — The canonical unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — Frozen text-encoder. tokenizer (CLIPTokenizer) — Tokenizer of class CLIPTokenizer. scheduler (UnCLIPScheduler) — A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.3) — Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — The image embedding. negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — The output format of the generated image. Choose between: "np" (np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — Whether or not to return a KandinskyPriorPipelineOutput instead of a plain tuple. Returns KandinskyPriorPipelineOutput or tuple Function invoked when calling the pipeline for generation.
Examples: Copied
>>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline
>>> from diffusers.utils import load_image
>>> import torch

>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> prompt = "red cat, 4k photo"
>>> img = load_image(
...     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
...     "/kandinsky/cat.png"
... )
>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple()

>>> pipe = KandinskyV22Pipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
... )
>>> pipe.to("cuda")

>>> image = pipe(
...     image_embeds=image_emb,
...     negative_image_embeds=negative_image_emb,
...     height=768,
...     width=768,
...     num_inference_steps=100,
... ).images

>>> image[0].save("cat.png")
interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — list of prompts and images to guide the image generation. weights (List[float]) — list of weights for each condition in images_and_prompts. num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic. latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). negative_prompt (str or List[str], optional) — The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality. Returns KandinskyPriorPipelineOutput or tuple Function invoked when using the prior pipeline for interpolation. Examples: Copied
>>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline
>>> from diffusers.utils import load_image
>>> import PIL

>>> import torch
>>> from torchvision import transforms

>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
...     "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
... )
>>> pipe_prior.to("cuda")

>>> img1 = load_image(
... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
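The combined pipeline whose reference continues below simply wires the prior and the KandinskyV22Img2ImgPipeline documented above into one object. If you prefer to run the two stages yourself (for example, to reuse one set of image embeddings across several decoder calls), here is a hedged sketch of the manual flow; the checkpoint names and the sample image URL follow the other examples on this page, while the strength and step values are illustrative assumptions.

import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Img2ImgPipeline
from diffusers.utils import load_image

# Stage 1: the prior maps the text prompt to CLIP image embeddings.
pipe_prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_emb, negative_image_emb = pipe_prior("A fantasy landscape, Cinematic lighting").to_tuple()

# Stage 2: the decoder transforms the input image, conditioned on those embeddings.
pipe = KandinskyV22Img2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

image = pipe(
    image=init_image,
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    height=768,
    width=768,
    strength=0.3,
    num_inference_steps=50,
).images[0]
image.save("img2img.png")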
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" (np.array) or "pt" (torch.Tensor). callback (Callable, optional) — A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called. If not specified, the callback is called at every step. return_dict (bool, optional, defaults to True) — Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: Copied
from diffusers import AutoPipelineForImage2Image
import torch
import requests
from io import BytesIO
from PIL import Image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
original_image = Image.open(BytesIO(response.content)).convert("RGB")
original_image.thumbnail((768, 768))

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25
).images[0]
enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDPMScheduler) — A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
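The call signature below expects precomputed CLIP image embeddings and a hint tensor (for example, a depth map). As a hedged sketch only: the depth-hint preparation via the transformers depth-estimation pipeline and the kandinsky-2-2-controlnet-depth checkpoint are assumptions based on common usage rather than part of this reference, and the strength and step values are illustrative.

import numpy as np
import torch
from transformers import pipeline
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
).resize((768, 768))

# Build a 3-channel depth "hint" from the input image (assumed preprocessing).
depth_estimator = pipeline("depth-estimation")
depth = np.array(depth_estimator(img)["depth"])[:, :, None]
depth = np.concatenate([depth, depth, depth], axis=2)
hint = (torch.from_numpy(depth).float() / 255.0).permute(2, 0, 1).unsqueeze(0)

# Prior: derive image embeddings from the prompt and the reference image.
prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_emb, negative_image_emb = prior("a cat, 4k photo", image=img, strength=0.85).to_tuple()

# Decoder: controlnet-conditioned image-to-image generation.
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    image=img,
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    hint=hint.half().to("cuda"),
    height=768,
    width=768,
    strength=0.5,
    num_inference_steps=50,
).images[0]
image.save("controlnet_img2img.png")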
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class. Returns ImagePipelineOutput or tuple Function invoked when calling the pipeline for generation. Examples: Copied
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch
import numpy as np

pipe = AutoPipelineForInpainting.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

prompt = "A fantasy landscape, Cinematic lighting"
negative_prompt = "low quality, bad quality"

original_image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/kandinsky/cat.png"
)

mask = np.zeros((768, 768), dtype=np.float32)
# Let's mask out an area above the cat's head
mask[:250, 250:-250] = 1

image = pipe(
    prompt=prompt, negative_prompt=negative_prompt, image=original_image, mask_image=mask, num_inference_steps=25
).images[0]
enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. diff --git a/scrapped_outputs/eaa94b1da1e6b5a4cde4d505786e1f41.txt b/scrapped_outputs/eaa94b1da1e6b5a4cde4d505786e1f41.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/eaacbc1693549fa276c62828e7489c55.txt b/scrapped_outputs/eaacbc1693549fa276c62828e7489c55.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/eaacf27b6259dc15d3f75d4d5051dd36.txt b/scrapped_outputs/eaacf27b6259dc15d3f75d4d5051dd36.txt new file mode 100644 index 0000000000000000000000000000000000000000..13aef0767c19d544c8b380b818921e179de42362 --- /dev/null +++ b/scrapped_outputs/eaacf27b6259dc15d3f75d4d5051dd36.txt @@ -0,0 +1,14 @@ +Speed up inference There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or torch.nn.functional.scaled_dot_product_attention in PyTorch 2.0 for their memory-efficient attention. In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide.
The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect.

optimization | latency | speed-up
original | 9.50s | x1
fp16 | 3.61s | x2.63
channels last | 3.30s | x2.88
traced UNet | 3.21s | x2.96
memory efficient attention | 2.63s | x3.61

Use TensorFloat-32 On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations, typically with a negligible loss in numerical accuracy. Copied
import torch

torch.backends.cuda.matmul.allow_tf32 = True
You can learn more about TF32 in the Mixed precision training guide. Half-precision weights To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16: Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
Don't use torch.autocast in any of the pipelines as it can lead to black images and is always slower than pure float16 precision. diff --git a/scrapped_outputs/eae5be0b9292cfe4be241c74a395a473.txt b/scrapped_outputs/eae5be0b9292cfe4be241c74a395a473.txt new file mode 100644 index 0000000000000000000000000000000000000000..0bb1c25cee39a4d571553ee70193cbd912f297b7 --- /dev/null +++ b/scrapped_outputs/eae5be0b9292cfe4be241c74a395a473.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
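Before the class reference below, here is a brief, hedged sketch of loading the prior on its own and running one denoising step with dummy inputs. The kandinsky-community/kandinsky-2-2-prior checkpoint and its prior subfolder are assumptions about how a published prior repository is laid out, and the tensor shapes simply mirror the parameter descriptions that follow.

import torch
from diffusers import PriorTransformer

# Load only the prior component of a published checkpoint (assumed repository layout).
prior = PriorTransformer.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", subfolder="prior"
)

# One dummy denoising step: a batch of 2 noisy CLIP image embeddings at timestep 999.
dim = prior.config.embedding_dim
hidden_states = torch.randn(2, dim)                                        # current noisy image embeddings
proj_embedding = torch.randn(2, dim)                                       # conditioning embedding
encoder_hidden_states = torch.randn(2, prior.config.num_embeddings, dim)   # text token hidden states

out = prior(
    hidden_states,
    timestep=999,
    proj_embedding=proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
)
print(out.predicted_image_embedding.shape)  # torch.Size([2, embedding_dim])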
PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. 
attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a PriorTransformerOutput instead of a plain +tuple. Returns +PriorTransformerOutput or tuple + +If return_dict is True, a PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. PriorTransformerOutput class diffusers.models.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/eaf6ba972abdd6d432b3adb758c11ca5.txt b/scrapped_outputs/eaf6ba972abdd6d432b3adb758c11ca5.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb002bda2ec40dcae9dd008d1a32f6d02e3caa74 --- /dev/null +++ b/scrapped_outputs/eaf6ba972abdd6d432b3adb758c11ca5.txt @@ -0,0 +1,97 @@ +Dance Diffusion + + +Overview + +Dance Diffusion by Zach Evans. +Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai. +For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page. +The original codebase of this implementation can be found here. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dance_diffusion.py +Unconditional Audio Generation +- + +DanceDiffusionPipeline + + +class diffusers.DanceDiffusionPipeline + +< +source +> +( +unet +scheduler + +) + + +Parameters + +unet (UNet1DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +IPNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 100 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +audio_length_in_s: typing.Optional[float] = None +return_dict: bool = True + +) +→ +AudioPipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of audio samples to generate. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio sample at +the expense of slower inference. 
generator (torch.Generator, optional) — One or a list of torch generator(s) to make generation deterministic. audio_length_in_s (float, optional, defaults to self.unet.config.sample_size/self.unet.config.sample_rate) — The length of the generated audio sample in seconds. Note that the output of the pipeline, i.e. sample_size, will be audio_length_in_s * self.unet.sample_rate. return_dict (bool, optional, defaults to True) — Whether or not to return an AudioPipelineOutput instead of a plain tuple. Returns AudioPipelineOutput or tuple ~pipelines.utils.AudioPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated audio samples. diff --git a/scrapped_outputs/eb124d8a2535cf16ad40654d7923491e.txt b/scrapped_outputs/eb124d8a2535cf16ad40654d7923491e.txt new file mode 100644 index 0000000000000000000000000000000000000000..191230d895650a96c9b8f907a3911fdd00d72140 --- /dev/null +++ b/scrapped_outputs/eb124d8a2535cf16ad40654d7923491e.txt @@ -0,0 +1,55 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion-based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — The starting beta value of inference. beta_end (float, defaults to 0.02) — The final beta value. beta_schedule (str, defaults to "linear") — The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — An array of betas to pass directly to the constructor without using beta_start and beta_end. variance_type (str, defaults to "fixed_small") — Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — Clip the predicted sample for numerical stability.
clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. 
return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/eb1314a7c61ad96002bf09d26d305920.txt b/scrapped_outputs/eb1314a7c61ad96002bf09d26d305920.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c696398635d3121e95a98f588be43126adc80ee --- /dev/null +++ b/scrapped_outputs/eb1314a7c61ad96002bf09d26d305920.txt @@ -0,0 +1,323 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. 
Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. 
The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. 
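For reference, the VAE slicing toggles above follow the same pattern as attention slicing. The snippet below is a minimal sketch, not part of the API reference itself; the checkpoint and prompt simply mirror the earlier examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# decode the VAE output slice by slice to lower peak memory when generating several images at once
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4).images

# go back to decoding in a single step once memory is no longer a concern
pipe.disable_vae_slicing()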
enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. 
resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. 
unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. 
clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/eb1da957674b8ad2c82fc505e8c59524.txt b/scrapped_outputs/eb1da957674b8ad2c82fc505e8c59524.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/eb1da957674b8ad2c82fc505e8c59524.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. 
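As a rough usage sketch, the scheduler documented below is typically paired with RePaintPipeline and an unconditional DDPM checkpoint for mask-based inpainting. The file names below are placeholders, and the exact loading pattern may differ slightly from your diffusers version.
import torch
import PIL.Image
from diffusers import RePaintPipeline, RePaintScheduler

# a 256x256 input image and a binary mask of the same size (file names are placeholders;
# see the `mask` parameter of step() below for the 0.0 / 1.0 convention)
original_image = PIL.Image.open("celeba_hq_256.png").convert("RGB").resize((256, 256))
mask_image = PIL.Image.open("mask_256.png").convert("RGB").resize((256, 256))

scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

generator = torch.Generator(device="cuda").manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,     # forward time jump "j" from the RePaint paper
    jump_n_sample=10,   # number of resampling passes per jump
    generator=generator,
)
output.images[0].save("inpainted.png")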
RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. 
Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/ebc2ff0e519a1107b93f32d856624d5f.txt b/scrapped_outputs/ebc2ff0e519a1107b93f32d856624d5f.txt new file mode 100644 index 0000000000000000000000000000000000000000..923735996db131119f1ed82ba37eae73f2bb0f3e --- /dev/null +++ b/scrapped_outputs/ebc2ff0e519a1107b93f32d856624d5f.txt @@ -0,0 +1,27 @@ +DDPM Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. The original codebase can be found at hohonathanho/diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. DDPMPipeline class diffusers.DDPMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None num_inference_steps: int = 1000 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. 
generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. num_inference_steps (int, optional, defaults to 1000) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDPMPipeline + +>>> # load model and scheduler +>>> pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe().images[0] + +>>> # save image +>>> image.save("ddpm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/ebddddb6101da7fd8597222f203ae9e7.txt b/scrapped_outputs/ebddddb6101da7fd8597222f203ae9e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ebeaf86fef796251fc8f107cbbe65a5d.txt b/scrapped_outputs/ebeaf86fef796251fc8f107cbbe65a5d.txt new file mode 100644 index 0000000000000000000000000000000000000000..13e167b0959d0af366c380365e13d791431b1240 --- /dev/null +++ b/scrapped_outputs/ebeaf86fef796251fc8f107cbbe65a5d.txt @@ -0,0 +1,8 @@ +Utilities Utility and helper functions for working with 🤗 Diffusers. numpy_to_pil diffusers.utils.numpy_to_pil < source > ( images ) Convert a numpy image or a batch of images to a PIL image. pt_to_pil diffusers.utils.pt_to_pil < source > ( images ) Convert a torch image to a PIL image. load_image diffusers.utils.load_image < source > ( image: Union convert_method: Callable = None ) → PIL.Image.Image Parameters image (str or PIL.Image.Image) — +The image to convert to the PIL Image format. convert_method (Callable[[PIL.Image.Image], PIL.Image.Image], optional) — +A conversion method to apply to the image after loading it. +When set to None the image will be converted “RGB”. Returns +PIL.Image.Image + +A PIL Image. + Loads image to a PIL Image. export_to_gif diffusers.utils.export_to_gif < source > ( image: List output_gif_path: str = None ) export_to_video diffusers.utils.export_to_video < source > ( video_frames: Union output_video_path: str = None fps: int = 8 ) make_image_grid diffusers.utils.make_image_grid < source > ( images: List rows: int cols: int resize: int = None ) Prepares a single grid of images. Useful for visualization purposes. 
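Taken together, these helpers cover the common loading and visualization steps. The short sketch below strings a few of them together; the file paths are placeholders only.
from diffusers.utils import load_image, make_image_grid, export_to_gif

# load two images from disk or a URL (paths are placeholders)
image_a = load_image("sample_a.png")
image_b = load_image("sample_b.png")

# arrange both images on a single 1x2 grid, resizing each to 256 pixels for a quick comparison
grid = make_image_grid([image_a, image_b], rows=1, cols=2, resize=256)
grid.save("grid.png")

# write the same two frames out as a short GIF
gif_path = export_to_gif([image_a, image_b], output_gif_path="frames.gif")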
diff --git a/scrapped_outputs/ebefb00b0466cf51a958b927569bb560.txt b/scrapped_outputs/ebefb00b0466cf51a958b927569bb560.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/ebefb00b0466cf51a958b927569bb560.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. 
The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. 
Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
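The Tips above only describe which flags to set. As a minimal sketch (the 25-step count and the pairing with DPMSolverMultistepScheduler are illustrative assumptions, not requirements from this page), the inverse scheduler can be created from an existing scheduler's configuration, and dynamic thresholding can be switched on for pixel-space models: Copied
from diffusers import DPMSolverMultistepScheduler, DPMSolverMultistepInverseScheduler

# Forward scheduler used for sampling; the inverse scheduler is built from the same
# config so that inversion and sampling share one noise schedule.
scheduler = DPMSolverMultistepScheduler(algorithm_type="dpmsolver++", solver_order=2)
inverse_scheduler = DPMSolverMultistepInverseScheduler.from_config(scheduler.config)

# For pixel-space diffusion models only (see Tips above): enable dynamic thresholding.
pixel_space_inverse_scheduler = DPMSolverMultistepInverseScheduler(
    algorithm_type="dpmsolver++", thresholding=True
)

# Discretize the inversion into 25 timesteps before running the inversion loop.
inverse_scheduler.set_timesteps(num_inference_steps=25)
In practice, the inverse scheduler is typically assigned to the inverse_scheduler component of a pipeline that performs latent inversion (for example, DiffEdit), while the regular scheduler handles sampling.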
diff --git a/scrapped_outputs/ec0bce75f10dca3a14f08190301e9ec0.txt b/scrapped_outputs/ec0bce75f10dca3a14f08190301e9ec0.txt new file mode 100644 index 0000000000000000000000000000000000000000..d509c1ac7ab849c2b3afbdbbc876d1114069ba2e --- /dev/null +++ b/scrapped_outputs/ec0bce75f10dca3a14f08190301e9ec0.txt @@ -0,0 +1,217 @@ +Latent Consistency Models Latent Consistency Models (LCMs) were proposed in Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference by Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. The abstract of the paper is as follows: Latent Diffusion models (LDMs) have achieved remarkable results in synthesizing high-resolution images. However, the iterative sampling process is computationally intensive and leads to slow generation. Inspired by Consistency Models (song et al.), we propose Latent Consistency Models (LCMs), enabling swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion (rombach et al). Viewing the guided reverse diffusion process as solving an augmented probability flow ODE (PF-ODE), LCMs are designed to directly predict the solution of such ODE in latent space, mitigating the need for numerous iterations and allowing rapid, high-fidelity sampling. Efficiently distilled from pre-trained classifier-free guided diffusion models, a high-quality 768 x 768 2~4-step LCM takes only 32 A100 GPU hours for training. Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference. Project Page: this https URL. A demo for the SimianLuo/LCM_Dreamshaper_v7 checkpoint can be found here. The pipelines were contributed by luosiallen, nagolinc, and dg845. LatentConsistencyModelPipeline class diffusers.LatentConsistencyModelPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for text-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 4 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 4) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps used to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set, this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 8.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. 
+ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import DiffusionPipeline +>>> import torch + +>>> pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0).images +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. 
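As a small illustrative sketch (the scale factors below are the combination commonly reported for Stable Diffusion v1-style UNets and are an assumption here, not values taken from this page), FreeU can be toggled on the LCM pipeline like this: Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("cuda")

# Illustrative FreeU factors; check the official FreeU repository for combinations
# known to work well with your base model.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)
image = pipe(
    "a cozy cabin in a snowy forest, watercolor", num_inference_steps=4, guidance_scale=8.0
).images[0]

# Turn the mechanism off again when it is no longer needed.
pipe.disable_freeu()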
enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
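When the same prompt is reused across many generations, its embeddings can be pre-computed once with encode_prompt() and passed back in through prompt_embeds. The sketch below assumes the method returns a (prompt_embeds, negative_prompt_embeds) tuple, mirroring the Stable Diffusion pipelines: Copied
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("cuda")

# Encode the prompt once; LCM guidance is distilled from classifier-free guidance,
# so no negative prompt is used and CFG is disabled here.
prompt_embeds, _ = pipe.encode_prompt(
    prompt="Self-portrait oil painting, a beautiful cyborg with golden hair, 8k",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=False,
)

# Reuse the cached embeddings instead of re-tokenizing and re-encoding on every call.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=4, guidance_scale=8.0).images[0]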
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 LatentConsistencyModelImg2ImgPipeline class diffusers.LatentConsistencyModelImg2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: LCMScheduler safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Currently only +supports LCMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. requires_safety_checker (bool, optional, defaults to True) — +Whether the pipeline requires a safety checker component. Pipeline for image-to-image generation using a latent consistency model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 4 strength: float = 0.8 original_inference_steps: int = None timesteps: List = None guidance_scale: float = 8.5 num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. 
num_inference_steps (int, optional, defaults to 4) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. original_inference_steps (int, optional) — +The original number of inference steps used to generate a linearly-spaced timestep schedule, from which +we will draw num_inference_steps evenly spaced timesteps as our final timestep schedule, +following the Skipping-Step method in the paper (see Section 4.3). If not set, this will default to the +scheduler’s original_inference_steps attribute. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps +timesteps on the original LCM training/distillation timestep schedule are used. Must be in descending +order. guidance_scale (float, optional, defaults to 8.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. +Note that the original latent consistency models paper uses a different CFG formulation where the +guidance scales are decreased by 1 (so in the paper formulation CFG is enabled when guidance_scale > 0). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as the callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AutoPipelineForImage2Image +>>> import torch +>>> import PIL + +>>> pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7") +>>> # To save GPU memory, torch.float16 can be used, but it may compromise image quality. +>>> pipe.to(torch_device="cuda", torch_dtype=torch.float32) + +>>> prompt = "High altitude snowy mountains" +>>> image = PIL.Image.open("./snowy_mountains.png") + +>>> # Can be set to 1~50 steps. LCM support fast inference even <= 4 steps. Recommend: 1~8 steps. +>>> num_inference_steps = 4 +>>> images = pipe( +... prompt=prompt, image=image, num_inference_steps=num_inference_steps, guidance_scale=8.0 +... ).images + +>>> images[0].save("image.png") enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. 
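For large inputs, the VAE memory helpers above can be combined with the image-to-image call. A rough sketch (the local image path is only a placeholder): Copied
import PIL.Image
from diffusers import AutoPipelineForImage2Image

pipe = AutoPipelineForImage2Image.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe.to("cuda")

# Decode the VAE output in slices and tiles to lower peak memory usage;
# both options trade a little speed for a smaller memory footprint.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

init_image = PIL.Image.open("./snowy_mountains.png")  # placeholder path
image = pipe(
    prompt="High altitude snowy mountains",
    image=init_image,
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]

# Restore single-pass decoding when memory is not a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()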
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/ec2865690594c185fed2de2173362d3d.txt b/scrapped_outputs/ec2865690594c185fed2de2173362d3d.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/ec2865690594c185fed2de2173362d3d.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference. 
With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from a 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. torch.compile PyTorch 2 includes torch.compile which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduces the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed which when placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency and it becomes more evident when the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, any CPU and GPU communication sync should be none or be kept to a bare minimum because it can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consists of attention blocks and feed-forward blocks. 
In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. 
Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. diff --git a/scrapped_outputs/ec3e7f3b1eba765e3b7b6a80fe7f59c0.txt b/scrapped_outputs/ec3e7f3b1eba765e3b7b6a80fe7f59c0.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae719be0b7ba5e539ea6636677a7dcc7a90dd1e7 --- /dev/null +++ b/scrapped_outputs/ec3e7f3b1eba765e3b7b6a80fe7f59c0.txt @@ -0,0 +1,88 @@ +Text-to-(RGB, depth) LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. LDM3D generates an image and a depth map from a given text prompt unlike the existing text-to-image diffusion models such as Stable Diffusion which only generates an image. With almost the same number of parameters, LDM3D achieves to create a latent space that can compress both the RGB images and the depth maps. Two checkpoints are available for use: ldm3d-original. The original checkpoint used in the paper ldm3d-4c. The new version of LDM3D using 4 channels inputs instead of 6-channels inputs and finetuned on higher resolution images. The abstract from the paper is: This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! 
StableDiffusionLDM3DPipeline class diffusers.StableDiffusionLDM3DPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image and 3D generation using LDM3D. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 49 guidance_scale: float = 5.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLDM3DPipeline + +>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c") +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> output = pipe(prompt) +>>> rgb_image, depth_image = output.rgb, output.depth +>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg") +>>> depth_image[0].save("astronaut_ldm3d_depth.png") disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. 
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. LDM3DPipelineOutput class diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput < source > ( rgb: Union depth: Union nsfw_content_detected: Optional ) Parameters rgb (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). depth (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. __call__ ( *args **kwargs ) Call self as a function. Upscaler LDM3D-VR is an extended version of LDM3D. The abstract from the paper is: +Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. 
Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods. Two checkpoints are available for use: ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline pipeline to be used. ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. Can be used in cascade after the original LDM3D pipeline using the StableDiffusionUpscaleLDM3DPipeline from the community pipelines. diff --git a/scrapped_outputs/ec470df8410a1520015e3e26f61f5309.txt b/scrapped_outputs/ec470df8410a1520015e3e26f61f5309.txt new file mode 100644 index 0000000000000000000000000000000000000000..96a0a5c22497290cdb231bbf72184daeee1b4d8c --- /dev/null +++ b/scrapped_outputs/ec470df8410a1520015e3e26f61f5309.txt @@ -0,0 +1,18 @@ +VQModel The VQ-VAE model was introduced in Neural Discrete Representation Learning by Aaron van den Oord, Oriol Vinyals and Koray Kavukcuoglu. The model is used in 🤗 Diffusers to decode latent representations into images. Unlike AutoencoderKL, the VQModel works in a quantized latent space. The abstract from the paper is: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. VQModel class diffusers.VQModel < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 3 sample_size: int = 32 num_vq_embeddings: int = 256 norm_num_groups: int = 32 vq_embed_dim: Optional = None scaling_factor: float = 0.18215 norm_type: str = 'group' mid_block_add_attention = True lookup_from_codebook = False force_upcast = False ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 1) — Number of layers per block. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 3) — Number of channels in the latent space. 
sample_size (int, optional, defaults to 32) — Sample input size. num_vq_embeddings (int, optional, defaults to 256) — Number of codebook vectors in the VQ-VAE. norm_num_groups (int, optional, defaults to 32) — Number of groups for normalization layers. vq_embed_dim (int, optional) — Hidden dim of codebook vectors in the VQ-VAE. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. norm_type (str, optional, defaults to "group") — +Type of normalization layer to use. Can be one of "group" or "spatial". A VQ-VAE model for decoding latent representations. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor return_dict: bool = True ) → VQEncoderOutput or tuple Parameters sample (torch.FloatTensor) — Input sample. return_dict (bool, optional, defaults to True) — +Whether or not to return a models.vq_model.VQEncoderOutput instead of a plain tuple. Returns +VQEncoderOutput or tuple + +If return_dict is True, a VQEncoderOutput is returned, otherwise a plain tuple +is returned. + The VQModel forward method. VQEncoderOutput class diffusers.models.vq_model.VQEncoderOutput < source > ( latents: FloatTensor ) Parameters latents (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The encoded output sample from the last layer of the model. Output of VQModel encoding method. diff --git a/scrapped_outputs/ec4f0ec50932fa68e202ca91c7bdc927.txt b/scrapped_outputs/ec4f0ec50932fa68e202ca91c7bdc927.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/ec4f0ec50932fa68e202ca91c7bdc927.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. 
generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_prompt = "bad, deformed, ugly, bad anatomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head.
This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedence. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, the function uses self.text_encoder. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, the function uses self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin.
+The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. 
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. 
A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/ec9e5948756c88178a3164f30eec5d46.txt b/scrapped_outputs/ec9e5948756c88178a3164f30eec5d46.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f36bce79e2596c156b8269c32ff5e3ac4f935d2 --- /dev/null +++ b/scrapped_outputs/ec9e5948756c88178a3164f30eec5d46.txt @@ -0,0 +1,9 @@ +Stable Video Diffusion Stable Video Diffusion was proposed in Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets by Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, Robin Rombach. The abstract from the paper is: We present Stable Video Diffusion - a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D-prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at this https URL. To learn how to use Stable Video Diffusion, take a look at the Stable Video Diffusion guide. Check out the Stability AI Hub organization for the base and extended frame checkpoints! Tips Video generation is memory-intensive and one way to reduce your memory usage is to set enable_forward_chunking on the pipeline’s UNet so you don’t run the entire feedforward layer at once. Breaking it up into chunks in a loop is more efficient. 
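For example, a minimal sketch of enabling feed-forward chunking on a Stable Video Diffusion pipeline; the checkpoint name, image path, and decode_chunk_size value below are illustrative assumptions, not requirements:

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Assumed checkpoint; any Stable Video Diffusion checkpoint should work similarly.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16, variant="fp16"
)
pipe.enable_model_cpu_offload()

# Run the UNet's feed-forward layers in chunks instead of all at once.
pipe.unet.enable_forward_chunking()

image = load_image("path/to/conditioning_image.png")  # placeholder path
frames = pipe(image, decode_chunk_size=2, generator=torch.manual_seed(0)).frames[0]
export_to_video(frames, "generated.mp4", fps=7)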
Check out the Text or image-to-video guide for more details about how certain parameters can affect video generation and how to optimize inference by reducing memory usage. StableVideoDiffusionPipeline class diffusers.StableVideoDiffusionPipeline < source > ( vae: AutoencoderKLTemporalDecoder image_encoder: CLIPVisionModelWithProjection unet: UNetSpatioTemporalConditionModel scheduler: EulerDiscreteScheduler feature_extractor: CLIPImageProcessor ) Parameters vae (AutoencoderKLTemporalDecoder) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (laion/CLIP-ViT-H-14-laion2B-s32B-b79K). unet (UNetSpatioTemporalConditionModel) — +A UNetSpatioTemporalConditionModel to denoise the encoded image latents. scheduler (EulerDiscreteScheduler) — +A scheduler to be used in combination with unet to denoise the encoded image latents. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images. Pipeline to generate video from an input image using Stable Video Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). StableVideoDiffusionPipelineOutput class diffusers.pipelines.stable_video_diffusion.StableVideoDiffusionPipelineOutput < source > ( frames: Union ) Parameters frames ([List[List[PIL.Image.Image]], np.ndarray, torch.FloatTensor]) — +List of denoised PIL images of length batch_size or numpy array or torch tensor +of shape (batch_size, num_frames, height, width, num_channels). Output class for Stable Video Diffusion pipeline. diff --git a/scrapped_outputs/ecbd0df37672903426ce3507bbc89c6e.txt b/scrapped_outputs/ecbd0df37672903426ce3507bbc89c6e.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/ecbd0df37672903426ce3507bbc89c6e.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. 
Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/eccc9fae8da632e2fcfe2b00a471047a.txt b/scrapped_outputs/eccc9fae8da632e2fcfe2b00a471047a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ece53485e815d64f0cad8bc44354dd59.txt b/scrapped_outputs/ece53485e815d64f0cad8bc44354dd59.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ef40f070274021b77fb2e361dbd5e9e695ba0c --- /dev/null +++ b/scrapped_outputs/ece53485e815d64f0cad8bc44354dd59.txt @@ -0,0 +1,116 @@ +Single files Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a ckpt or safetensors file. These single file types are typically produced from community trained models. There are three classes for loading single file weights: FromSingleFileMixin supports loading pretrained pipeline weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalVAEMixin supports loading pretrained AutoencoderKL (VAE) weights stored in a single file, which can either be a ckpt or safetensors file. FromOriginalControlnetMixin supports loading pretrained ControlNet weights stored in a single file, which can either be a ckpt or safetensors file. To learn more about how to load single file weights, see the Load different Stable Diffusion formats loading guide. FromSingleFileMixin class diffusers.loaders.FromSingleFileMixin < source > ( ) Load model weights saved in the .ckpt format into a DiffusionPipeline. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git.
use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... ) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") FromOriginalVAEMixin class diffusers.loaders.FromOriginalVAEMixin < source > ( ) Load pretrained AutoencoderKL weights saved in the .ckpt or .safetensors format into an AutoencoderKL. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + config_file (str, optional) — +Filepath to the configuration YAML file associated with the model. If not provided it will default to: +https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git.
image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution +Image Synthesis with Latent Diffusion Models paper. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate an AutoencoderKL from pretrained VAE weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Make sure to pass both image_size and scaling_factor to from_single_file() if you’re loading +a VAE from SDXL or a Stable Diffusion v2 model or higher. Examples: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be local file +model = AutoencoderKL.from_single_file(url) FromOriginalControlnetMixin class diffusers.loaders.FromOriginalControlNetMixin < source > ( ) Load pretrained ControlNet weights saved in the .ckpt or .safetensors format into a ControlNetModel. from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a ControlNetModel from pretrained ControlNet weights saved in the original .ckpt or +.safetensors format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel + +url = "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth" # can also be a local path +controlnet = ControlNetModel.from_single_file(url) + +url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.safetensors" # can also be a local path +pipe = StableDiffusionControlNetPipeline.from_single_file(url, controlnet=controlnet) diff --git a/scrapped_outputs/ecf8b2cfd3da4391c561547a489d77b7.txt b/scrapped_outputs/ecf8b2cfd3da4391c561547a489d77b7.txt new file mode 100644 index 0000000000000000000000000000000000000000..49dfad88e1e2c0dcad3d9918f9f7b9486f85e0dc --- /dev/null +++ b/scrapped_outputs/ecf8b2cfd3da4391c561547a489d77b7.txt @@ -0,0 +1,92 @@ +DPMSolverMultistepInverse DPMSolverMultistepInverse is the inverted scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. The implementation is mostly based on the DDIM inversion definition of Null-text Inversion for Editing Real Images using Guided Diffusion Models and notebook implementation of the DiffEdit latent inversion from Xiang-cd/DiffEdit-stable-diffusion. Tips Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use the dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion.
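As a rough sketch (the surrounding pipeline setup is assumed rather than prescribed), enabling dynamic thresholding on the inverse scheduler for a pixel-space model could look like this:

from diffusers import DPMSolverMultistepInverseScheduler

# Only appropriate for pixel-space diffusion models; latent-space models
# such as Stable Diffusion should leave thresholding at its default (False).
inverse_scheduler = DPMSolverMultistepInverseScheduler(
    algorithm_type="dpmsolver++",
    thresholding=True,
    dynamic_thresholding_ratio=0.995,  # default value, shown here for clarity
)
inverse_scheduler.set_timesteps(num_inference_steps=50)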
DPMSolverMultistepInverseScheduler class diffusers.DPMSolverMultistepInverseScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. 
If True, +the sigmas are determined according to a sequence of noise levels {σi}. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepInverseScheduler is the reverse scheduler of DPMSolverMultistepScheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. 
scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/ed0d59a2cf8feb23b084591c368b59fc.txt b/scrapped_outputs/ed0d59a2cf8feb23b084591c368b59fc.txt new file mode 100644 index 0000000000000000000000000000000000000000..b38b5c13a31ff2d5b90900e6331e648465b535b4 --- /dev/null +++ b/scrapped_outputs/ed0d59a2cf8feb23b084591c368b59fc.txt @@ -0,0 +1,174 @@ +Reduce memory usage A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage. In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to Speed up inference. The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption. latency speed-up original 9.50s x1 fp16 3.61s x2.63 channels last 3.30s x2.88 traced UNet 3.21s x2.96 memory-efficient attention 2.63s x3.61 Sliced VAE Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. 
You’ll likely want to couple this with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use sliced VAE, call enable_vae_slicing() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_vae_slicing() +#pipe.enable_xformers_memory_efficient_attention() +images = pipe([prompt] * 32).images You may see a small performance boost in VAE decoding on multi-image batches, and there should be no performance impact on single-image batches. Tiled VAE Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also use tiled VAE with enable_xformers_memory_efficient_attention() to reduce memory use further if you have xFormers installed. To use tiled VAE processing, call enable_vae_tiling() on your pipeline before inference: Copied import torch +from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) +pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") +prompt = "a beautiful landscape photograph" +pipe.enable_vae_tiling() +#pipe.enable_xformers_memory_efficient_attention() + +image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0] The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn’t see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller. CPU offloading Offloading the weights to the CPU and only loading them on the GPU when performing the forward pass can also save memory. Often, this technique can reduce memory consumption to less than 3GB. To perform CPU offloading, call enable_sequential_cpu_offload(): Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_sequential_cpu_offload() +image = pipe(prompt).images[0] CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as num_inference_steps); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers. Consider using model offloading if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won’t be as large. When using enable_sequential_cpu_offload(), don’t move the pipeline to CUDA beforehand or else the gain in memory consumption will only be minimal (see this issue for more information). enable_sequential_cpu_offload() is a stateful operation that installs hooks on the models.
Model offloading Model offloading requires 🤗 Accelerate version 0.17.0 or higher. Sequential CPU offloading preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they’re immediately returned to the CPU when a new module runs. Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model’s constituent submodules. There is a negligible impact on inference time (compared with moving the pipeline to cuda), and it still provides some memory savings. During model offloading, only one of the main components of the pipeline (typically the text encoder, UNet and VAE) +is placed on the GPU while the others wait on the CPU. Components like the UNet that run for multiple iterations stay on the GPU until they’re no longer needed. Enable model offloading by calling enable_model_cpu_offload() on the pipeline: Copied import torch +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +) + +prompt = "a photo of an astronaut riding a horse on mars" +pipe.enable_model_cpu_offload() +image = pipe(prompt).images[0] In order to properly offload models after they’re called, it is required to run the entire pipeline and models are called in the pipeline’s expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See Removing Hooks for more information. enable_model_cpu_offload() is a stateful operation that installs hooks on the models and state on the pipeline. Channels-last memory format The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance, but you should still try and see if it works for your model. For example, to set the pipeline’s UNet to use the channels-last format: Copied print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1) +pipe.unet.to(memory_format=torch.channels_last) # in-place operation +print( + pipe.unet.conv_out.state_dict()["weight"].stride() +) # (2880, 1, 960, 320) having a stride of 1 for the 2nd dimension proves that it works Tracing Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model’s layers. The executable or ScriptFunction that is returned is optimized with just-in-time compilation.
To trace a UNet: Copied import time +import torch +from diffusers import StableDiffusionPipeline +import functools + +# torch disable grad +torch.set_grad_enabled(False) + +# set variables +n_experiments = 2 +unet_runs_per_experiment = 50 + + +# load inputs +def generate_inputs(): + sample = torch.randn((2, 4, 64, 64), device="cuda", dtype=torch.float16) + timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999 + encoder_hidden_states = torch.randn((2, 77, 768), device="cuda", dtype=torch.float16) + return sample, timestep, encoder_hidden_states + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") +unet = pipe.unet +unet.eval() +unet.to(memory_format=torch.channels_last) # use channels_last memory format +unet.forward = functools.partial(unet.forward, return_dict=False) # set return_dict=False as default + +# warmup +for _ in range(3): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet(*inputs) + +# trace +print("tracing..") +unet_traced = torch.jit.trace(unet, inputs) +unet_traced.eval() +print("done tracing") + + +# warmup and optimize graph +for _ in range(5): + with torch.inference_mode(): + inputs = generate_inputs() + orig_output = unet_traced(*inputs) + + +# benchmarking +with torch.inference_mode(): + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet_traced(*inputs) + torch.cuda.synchronize() + print(f"unet traced inference took {time.time() - start_time:.2f} seconds") + for _ in range(n_experiments): + torch.cuda.synchronize() + start_time = time.time() + for _ in range(unet_runs_per_experiment): + orig_output = unet(*inputs) + torch.cuda.synchronize() + print(f"unet inference took {time.time() - start_time:.2f} seconds") + +# save the model +unet_traced.save("unet_traced.pt") Replace the unet attribute of the pipeline with the traced model: Copied from diffusers import StableDiffusionPipeline +import torch +from dataclasses import dataclass + + +@dataclass +class UNet2DConditionOutput: + sample: torch.FloatTensor + + +pipe = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +# use jitted unet +unet_traced = torch.jit.load("unet_traced.pt") + + +# del pipe.unet +class TracedUNet(torch.nn.Module): + def __init__(self): + super().__init__() + self.in_channels = pipe.unet.config.in_channels + self.device = pipe.unet.device + + def forward(self, latent_model_input, t, encoder_hidden_states): + sample = unet_traced(latent_model_input, t, encoder_hidden_states)[0] + return UNet2DConditionOutput(sample=sample) + + +pipe.unet = TracedUNet() + +with torch.inference_mode(): + image = pipe([prompt] * 1, num_inference_steps=50).images[0] Memory-efficient attention Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is Flash Attention (you can check out the original code at HazyResearch/flash-attention). If you have PyTorch >= 2.0 installed, you should not expect a speed-up for inference when enabling xformers. 
To use Flash Attention, install the following: PyTorch > 1.12 CUDA available xFormers Then call enable_xformers_memory_efficient_attention() on the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") + +pipe.enable_xformers_memory_efficient_attention() + +with torch.inference_mode(): + sample = pipe("a small cat") + +# optional: You can disable it via +# pipe.disable_xformers_memory_efficient_attention() The iteration speed when using xformers should match the iteration speed of PyTorch 2.0 as described here. diff --git a/scrapped_outputs/ed1947f0f7018e811371ac1f51608c1f.txt b/scrapped_outputs/ed1947f0f7018e811371ac1f51608c1f.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb5fa177049166af704bb3ebcbb48f62393ceeef --- /dev/null +++ b/scrapped_outputs/ed1947f0f7018e811371ac1f51608c1f.txt @@ -0,0 +1,7 @@ +Overview + +Welcome to 🧨 Diffusers! If you’re new to diffusion models and generative AI, and want to learn more, then you’ve come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used. +You’ll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you’ll learn how to train your own diffusion model to generate what you want. +After completing the tutorials, you’ll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications. +Feel free to join our community on Discord or the forums to connect and collaborate with other users and developers! +Let’s start diffusing! 🧨 diff --git a/scrapped_outputs/ed43095ae5f61357977ad6fa8eee8909.txt b/scrapped_outputs/ed43095ae5f61357977ad6fa8eee8909.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/ed43095ae5f61357977ad6fa8eee8909.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. 
You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/ed5ca1c2fcf3be2b72157d414e371210.txt b/scrapped_outputs/ed5ca1c2fcf3be2b72157d414e371210.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa9d7438e50cb065d601931ea93e05ed669bc92 --- /dev/null +++ b/scrapped_outputs/ed5ca1c2fcf3be2b72157d414e371210.txt @@ -0,0 +1,58 @@ +Effective and efficient diffusion Getting the DiffusionPipeline to generate images in a certain style or include what you want can be tricky. Often times, you have to run the DiffusionPipeline several times before you end up with an image you’re happy with. But generating something out of nothing is a computationally intensive process, especially if you’re running inference over and over again. This is why it’s important to get the most computational (speed) and memory (GPU vRAM) efficiency from the pipeline to reduce the time between inference cycles so you can iterate faster. This tutorial walks you through how to generate faster and better with the DiffusionPipeline. Begin by loading the runwayml/stable-diffusion-v1-5 model: Copied from diffusers import DiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = DiffusionPipeline.from_pretrained(model_id, use_safetensors=True) The example prompt you’ll use is a portrait of an old warrior chief, but feel free to use your own prompt: Copied prompt = "portrait photo of a old warrior chief" Speed 💡 If you don’t have access to a GPU, you can use one for free from a GPU provider like Colab! 
One of the simplest ways to speed up inference is to place the pipeline on a GPU the same way you would with any PyTorch module: Copied pipeline = pipeline.to("cuda") To make sure you can use the same image and improve on it, use a Generator and set a seed for reproducibility: Copied import torch + +generator = torch.Generator("cuda").manual_seed(0) Now you can generate an image: Copied image = pipeline(prompt, generator=generator).images[0] +image This process took ~30 seconds on a T4 GPU (it might be faster if your allocated GPU is better than a T4). By default, the DiffusionPipeline runs inference with full float32 precision for 50 inference steps. You can speed this up by switching to a lower precision like float16 or running fewer inference steps. Let’s start by loading the model in float16 and generate an image: Copied import torch + +pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True) +pipeline = pipeline.to("cuda") +generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator).images[0] +image This time, it only took ~11 seconds to generate the image, which is almost 3x faster than before! 💡 We strongly suggest always running your pipelines in float16, and so far, we’ve rarely seen any degradation in output quality. Another option is to reduce the number of inference steps. Choosing a more efficient scheduler could help decrease the number of steps without sacrificing output quality. You can find which schedulers are compatible with the current model in the DiffusionPipeline by calling the compatibles method: Copied pipeline.scheduler.compatibles +[ + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler, + diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, +] The Stable Diffusion model uses the PNDMScheduler by default which usually requires ~50 inference steps, but more performant schedulers like DPMSolverMultistepScheduler, require only ~20 or 25 inference steps. Use the from_config() method to load a new scheduler: Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) Now set the num_inference_steps to 20: Copied generator = torch.Generator("cuda").manual_seed(0) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image Great, you’ve managed to cut the inference time to just 4 seconds! 
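For reference, the speed-focused settings from this section can be combined into a single setup; this is only a consolidation of the snippets above (GPU placement, float16, the DPMSolverMultistepScheduler, and 20 inference steps): Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, use_safetensors=True)
pipeline = pipeline.to("cuda")
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)

# fixed seed so the result is comparable with the earlier runs
generator = torch.Generator("cuda").manual_seed(0)
image = pipeline("portrait photo of a old warrior chief", generator=generator, num_inference_steps=20).images[0]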
⚡️ Memory The other key to improving pipeline performance is consuming less memory, which indirectly implies more speed, since you’re often trying to maximize the number of images generated per second. The easiest way to see how many images you can generate at once is to try out different batch sizes until you get an OutOfMemoryError (OOM). Create a function that’ll generate a batch of images from a list of prompts and Generators. Make sure to assign each Generator a seed so you can reuse it if it produces a good result. Copied def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} Start with batch_size=4 and see how much memory you’ve consumed: Copied from diffusers.utils import make_image_grid + +images = pipeline(**get_inputs(batch_size=4)).images +make_image_grid(images, 2, 2) Unless you have a GPU with more vRAM, the code above probably returned an OOM error! Most of the memory is taken up by the cross-attention layers. Instead of running this operation in a batch, you can run it sequentially to save a significant amount of memory. All you have to do is configure the pipeline to use the enable_attention_slicing() function: Copied pipeline.enable_attention_slicing() Now try increasing the batch_size to 8! Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Whereas before you couldn’t even generate a batch of 4 images, now you can generate a batch of 8 images at ~3.5 seconds per image! This is probably the fastest you can go on a T4 GPU without sacrificing quality. Quality In the last two sections, you learned how to optimize the speed of your pipeline by using fp16, reducing the number of inference steps by using a more performant scheduler, and enabling attention slicing to reduce memory consumption. Now you’re going to focus on how to improve the quality of generated images. Better checkpoints The most obvious step is to use better checkpoints. The Stable Diffusion model is a good starting point, and since its official launch, several improved versions have also been released. However, using a newer version doesn’t automatically mean you’ll get better results. You’ll still have to experiment with different checkpoints yourself, and do a little research (such as using negative prompts) to get the best results. As the field grows, there are more and more high-quality checkpoints finetuned to produce certain styles. Try exploring the Hub and Diffusers Gallery to find one you’re interested in! Better pipeline components You can also try replacing the current pipeline components with a newer version. Let’s try loading the latest autoencoder from Stability AI into the pipeline, and generate some images: Copied from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +pipeline.vae = vae +images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Better prompt engineering The text prompt you use to generate an image is super important, so much so that it is called prompt engineering. Some considerations to keep during prompt engineering are: How is the image or similar images of the one I want to generate stored on the internet? What additional detail can I give that steers the model towards the style I want? 
With this in mind, let’s improve the prompt to include color and higher quality details: Copied prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" Generate a batch of images with the new prompt: Copied images = pipeline(**get_inputs(batch_size=8)).images +make_image_grid(images, rows=2, cols=4) Pretty impressive! Let’s tweak the second image - corresponding to the Generator with a seed of 1 - a bit more by adding some text about the age of the subject: Copied prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] +images = pipeline(prompt=prompts, generator=generator, num_inference_steps=25).images +make_image_grid(images, 2, 2) Next steps In this tutorial, you learned how to optimize a DiffusionPipeline for computational and memory efficiency as well as improving the quality of generated outputs. If you’re interested in making your pipeline even faster, take a look at the following resources: Learn how PyTorch 2.0 and torch.compile can yield 5 - 300% faster inference speed. On an A100 GPU, inference can be up to 50% faster! If you can’t use PyTorch 2, we recommend you install xFormers. Its memory-efficient attention mechanism works great with PyTorch 1.13.1 for faster speed and reduced memory consumption. Other optimization techniques, such as model offloading, are covered in this guide. diff --git a/scrapped_outputs/ed5e363428c9bd669455ad084564659d.txt b/scrapped_outputs/ed5e363428c9bd669455ad084564659d.txt new file mode 100644 index 0000000000000000000000000000000000000000..f57b44311834487e66dc102fce3208b71376c674 --- /dev/null +++ b/scrapped_outputs/ed5e363428c9bd669455ad084564659d.txt @@ -0,0 +1,49 @@ +Improve generation quality with FreeU The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: Backbone features primarily contribute to the denoising process Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features However, the skip connection can sometimes introduce unnatural image details. FreeU is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. 
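To make the rebalancing concrete, here is a purely illustrative sketch of the kind of scaling FreeU applies inside the UNet decoder; the function and argument names are hypothetical, and the actual implementation is more involved (for example, it scales only part of the backbone channels and filters the skip features in the frequency domain): Copied
import torch


def freeu_rebalance(backbone_feat: torch.Tensor, skip_feat: torch.Tensor, b: float, s: float):
    # b corresponds to the b1/b2 factors used below: values > 1 amplify the backbone
    # feature map and strengthen the denoising semantics
    # s corresponds to the s1/s2 factors: values < 1 attenuate the skip-connection
    # features and damp unnatural high-frequency detail
    return backbone_feat * b, skip_feat * s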
In this guide, you will apply FreeU to the StableDiffusionPipeline, StableDiffusionXLPipeline, and TextToVideoSDPipeline. You need to install Diffusers from source to run the examples below. StableDiffusionPipeline Load the pipeline: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. Copied pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) The values above are from the official FreeU code repository where you can also find reference hyperparameters for different models. Disable the FreeU mechanism by calling disable_freeu() on a pipeline. And then run inference: Copied prompt = "A squirrel eating a burger" +seed = 2023 +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (prompt and seed): Let’s see how Stable Diffusion 2 results are impacted: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Stable Diffusion XL Finally, let’s take a look at how FreeU affects Stable Diffusion XL results: Copied from diffusers import DiffusionPipeline +import torch + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") + +prompt = "A squirrel eating a burger" +seed = 2023 + +# Comes from +# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw +pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) +image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] +image Text-to-video generation FreeU can also be used to improve video quality: Copied from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video +import torch + +model_id = "cerspense/zeroscope_v2_576w" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "an astronaut riding a horse on mars" +seed = 2023 + +# The values come from +# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames[0] +export_to_video(video_frames, "astronaut_rides_horse.mp4") Thanks to kadirnar for helping to integrate the feature, and to justindujardin for the helpful discussions. diff --git a/scrapped_outputs/ed69e1393b7d9ec3f29db277dfe42ccd.txt b/scrapped_outputs/ed69e1393b7d9ec3f29db277dfe42ccd.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/ed69e1393b7d9ec3f29db277dfe42ccd.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. 
+MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. 
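To make these tips concrete, the sketch below combines a descriptive prompt, the suggested negative prompt, multiple candidate waveforms, and an explicit audio length; the specific argument values are only examples: Copied
import torch
import scipy
from diffusers import MusicLDMPipeline

pipe = MusicLDMPipeline.from_pretrained("ucsd-reach/musicldm", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "high quality, clear, melodic techno with a fast beat and synths"
negative_prompt = "low quality, average quality"

# num_waveforms_per_prompt > 1 enables automatic scoring; audios[0] is the best-ranked sample
audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=4,
).audios[0]

# save the best-ranked sample as a .wav file
scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)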
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. 
eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. 
When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/ed711cd66b4f29697d734922eeb67c22.txt b/scrapped_outputs/ed711cd66b4f29697d734922eeb67c22.txt new file mode 100644 index 0000000000000000000000000000000000000000..4122aa376447d7e50c2e46c3ee80f13ca14d4399 --- /dev/null +++ b/scrapped_outputs/ed711cd66b4f29697d734922eeb67c22.txt @@ -0,0 +1,286 @@ +Custom Pipelines + +For more information about community pipelines, please have a look at this issue. +Community examples consist of both inference and training examples that have been added by the community. +Please have a look at the following table to get an overview of all community examples. Click on the Code Example to get a copy-and-paste ready code example that you can try out. +If a community doesn’t work as expected, please open an issue and ping the author on it. +Example +Description +Code Example +Colab +Author +CLIP Guided Stable Diffusion +Doing CLIP guidance for text to image generation with Stable Diffusion +CLIP Guided Stable Diffusion + +Suraj Patil +One Step U-Net (Dummy) +Example showcasing of how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) +One Step U-Net +- +Patrick von Platen +Stable Diffusion Interpolation +Interpolate the latent space of Stable Diffusion between different prompts/seeds +Stable Diffusion Interpolation +- +Nate Raw +Stable Diffusion Mega +One Stable Diffusion Pipeline with all functionalities of Text2Image, Image2Image and Inpainting +Stable Diffusion Mega +- +Patrick von Platen +Long Prompt Weighting Stable Diffusion +One Stable Diffusion Pipeline without tokens length limit, and support parsing weighting in prompt. +Long Prompt Weighting Stable Diffusion +- +SkyTNT +Speech to Image +Using automatic-speech-recognition to transcribe text and Stable Diffusion to generate images +Speech to Image +- +Mikail Duzenli +To load a custom pipeline you just need to pass the custom_pipeline argument to DiffusionPipeline, as one of the files in diffusers/examples/community. Feel free to send a PR with your own pipelines, we will merge them quickly. + + + Copied +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder" +) + +Example usages + + +CLIP Guided Stable Diffusion + +CLIP guided stable diffusion can help to generate more realistic images +by guiding stable diffusion at every denoising step with an additional CLIP model. +The following code requires roughly 12GB of GPU RAM. 
+ + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPFeatureExtractor, CLIPModel +import torch + + +feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K") +clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16) + + +guided_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, + torch_dtype=torch.float16, +) +guided_pipeline.enable_attention_slicing() +guided_pipeline = guided_pipeline.to("cuda") + +prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements, fantasy magic, dark light night, intricate, elegant, sharp focus, illustration, highly detailed, digital painting, concept art, matte, art by WLOP and Artgerm and Albert Bierstadt, masterpiece" + +generator = torch.Generator(device="cuda").manual_seed(0) +images = [] +for i in range(4): + image = guided_pipeline( + prompt, + num_inference_steps=50, + guidance_scale=7.5, + clip_guidance_scale=100, + num_cutouts=4, + use_cutouts=False, + generator=generator, + ).images[0] + images.append(image) + +# save images locally +for i, img in enumerate(images): + img.save(f"./clip_guided_sd/image_{i}.png") +The images list contains a list of PIL images that can be saved locally or displayed directly in a google colab. +Generated images tend to be of higher qualtiy than natively using stable diffusion. E.g. the above script generates the following images: +. + +One Step Unet + +The dummy “one-step-unet” can be run as follows: + + + Copied +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet") +pipe() +Note: This community pipeline is not useful as a feature, but rather just serves as an example of how community pipelines can be added (see https://github.com/huggingface/diffusers/issues/841). + +Stable Diffusion Interpolation + +The following code can be run on a GPU of at least 8GB VRAM and should take approximately 5 minutes. + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + safety_checker=None, # Very important for videos...lots of false positives while interpolating + custom_pipeline="interpolate_stable_diffusion", +).to("cuda") +pipe.enable_attention_slicing() + +frame_filepaths = pipe.walk( + prompts=["a dog", "a cat", "a horse"], + seeds=[42, 1337, 1234], + num_interpolation_steps=16, + output_dir="./dreams", + batch_size=4, + height=512, + width=512, + guidance_scale=8.5, + num_inference_steps=50, +) +The output of the walk(...) function returns a list of images saved under the folder as defined in output_dir. You can use these images to create videos of stable diffusion. +Please have a look at https://github.com/nateraw/stable-diffusion-videos for more in-detail information on how to create videos using stable diffusion as well as more feature-complete functionality. + +Stable Diffusion Mega + +The Stable Diffusion Mega Pipeline lets you use the main use cases of the stable diffusion pipeline in a single class. 
+ + + Copied +#!/usr/bin/env python3 +from diffusers import DiffusionPipeline +import PIL +import requests +from io import BytesIO +import torch + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="stable_diffusion_mega", + torch_dtype=torch.float16, +) +pipe.to("cuda") +pipe.enable_attention_slicing() + + +### Text-to-Image + +images = pipe.text2img("An astronaut riding a horse").images + +### Image-to-Image + +init_image = download_image( + "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe.img2img(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +### Inpainting + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +prompt = "a cat sitting on a bench" +images = pipe.inpaint(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.75).images As shown above, this one pipeline can run “text-to-image”, “image-to-image”, and “inpainting” all in a single class. Long Prompt Weighting Stable Diffusion The pipeline lets you input a prompt without the 77 token length limit, and you can increase a word’s weighting by using “()” or decrease it by using “[]”. The pipeline also lets you use the main use cases of the stable diffusion pipeline in a single class.
+ +pytorch + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "hakurei/waifu-diffusion", custom_pipeline="lpw_stable_diffusion", torch_dtype=torch.float16 +) +pipe = pipe.to("cuda") + +prompt = "best_quality (1girl:1.3) bow bride brown_hair closed_mouth frilled_bow frilled_hair_tubes frills (full_body:1.3) fox_ear hair_bow hair_tubes happy hood japanese_clothes kimono long_sleeves red_bow smile solo tabi uchikake white_kimono wide_sleeves cherry_blossoms" +neg_prompt = "lowres, bad_anatomy, error_body, error_hair, error_arm, error_hands, bad_hands, error_fingers, bad_fingers, missing_fingers, error_legs, bad_legs, multiple_legs, missing_legs, error_lighting, error_shadow, error_reflection, text, error, extra_digit, fewer_digits, cropped, worst_quality, low_quality, normal_quality, jpeg_artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] + +onnxruntime + + + + Copied +from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="lpw_stable_diffusion_onnx", + revision="onnx", + provider="CUDAExecutionProvider", +) + +prompt = "a photo of an astronaut riding a horse on mars, best quality" +neg_prompt = "lowres, bad anatomy, error body, error hair, error arm, error hands, bad hands, error fingers, bad fingers, missing fingers, error legs, bad legs, multiple legs, missing legs, error lighting, error shadow, error reflection, text, error, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry" + +pipe.text2img(prompt, negative_prompt=neg_prompt, width=512, height=512, max_embeddings_multiples=3).images[0] +if you see Token indices sequence length is longer than the specified maximum sequence length for this model ( *** > 77 ) . Running this sequence through the model will result in indexing errors. Do not worry, it is normal. + +Speech to Image + +The following code can generate an image from an audio sample using pre-trained OpenAI whisper-small and Stable Diffusion. 
+ + + Copied +import torch + +import matplotlib.pyplot as plt +from datasets import load_dataset +from diffusers import DiffusionPipeline +from transformers import ( + WhisperForConditionalGeneration, + WhisperProcessor, +) + + +device = "cuda" if torch.cuda.is_available() else "cpu" + +ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") + +audio_sample = ds[3] + +text = audio_sample["text"].lower() +speech_data = audio_sample["audio"]["array"] + +model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device) +processor = WhisperProcessor.from_pretrained("openai/whisper-small") + +diffuser_pipeline = DiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + custom_pipeline="speech_to_image_diffusion", + speech_model=model, + speech_processor=processor, + + torch_dtype=torch.float16, +) + +diffuser_pipeline.enable_attention_slicing() +diffuser_pipeline = diffuser_pipeline.to(device) + +output = diffuser_pipeline(speech_data) +plt.imshow(output.images[0]) +This example produces the following image: diff --git a/scrapped_outputs/ed74dbbf4b9c467daee0e0a545315060.txt b/scrapped_outputs/ed74dbbf4b9c467daee0e0a545315060.txt new file mode 100644 index 0000000000000000000000000000000000000000..f0397b5c22fb5325147b95174f9f4e470d2a9999 --- /dev/null +++ b/scrapped_outputs/ed74dbbf4b9c467daee0e0a545315060.txt @@ -0,0 +1,136 @@ +Kandinsky 2.2 This script is experimental, and it’s easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model’s embeddings. That’s why you’ll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models. Depending on your GPU, you may need to enable gradient_checkpointing (⚠️ not supported for the prior model!), mixed_precision, and gradient_accumulation_steps to help fit the model into memory and to speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers (version v0.0.16 fails for training on some GPUs so you may need to install a development version instead). This guide explores the train_text_to_image_prior.py and the train_text_to_image_decoder.py scripts to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the scripts, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/kandinsky2_2/text_to_image +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don’t cover every aspect of the scripts in detail. If you’re interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. Script parameters The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speed up training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_text_to_image_prior.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so let’s get straight to a walkthrough of the Kandinsky training scripts! Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_text_to_image_prior.py \ + --snr_gamma=5.0 Training script The training script is also similar to the Text-to-image training guide, but it’s been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. + + +The main() function contains the code for preparing the dataset and training the model.
One of the main differences you’ll notice right away is that the training script also loads a CLIPImageProcessor - in addition to a scheduler and tokenizer - for preprocessing images and a CLIPVisionModelWithProjection model for encoding the images: Copied noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") +image_processor = CLIPImageProcessor.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_processor" +) +tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer") + +with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() + text_encoder = CLIPTextModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype + ).eval() Kandinsky uses a PriorTransformer to generate the image embeddings, so you’ll want to set up the optimizer to learn the prior model’s parameters. Copied prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") +prior.train() +optimizer = optimizer_cls( + prior.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) + return examples Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction: Copied model_pred = prior( + noisy_latents, + timestep=timesteps, + proj_embedding=prompt_embeds, + encoder_hidden_states=text_encoder_hidden_states, + attention_mask=text_mask, +).predicted_image_embedding If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. + + +The main() function contains the code for preparing the dataset and training the model.
Unlike the prior model, the decoder initializes a VQModel to decode the latents into images and it uses a UNet2DConditionModel: Copied with ContextManagers(deepspeed_zero_init_disabled_context_manager()): + vae = VQModel.from_pretrained( + args.pretrained_decoder_model_name_or_path, subfolder="movq", torch_dtype=weight_dtype + ).eval() + image_encoder = CLIPVisionModelWithProjection.from_pretrained( + args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype + ).eval() +unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") Next, the script includes several image transforms and a preprocessing function for applying the transforms to the images and returning the pixel values: Copied def preprocess_train(examples): + images = [image.convert("RGB") for image in examples[image_column]] + examples["pixel_values"] = [train_transforms(image) for image in images] + examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values + return examples Lastly, the training loop handles converting the images to latents, adding noise, and predicting the noise residual. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Copied model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4] + + + Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 You’ll train on the Pokémon BLIP captions dataset to generate your own Pokémon, but you can also create and train on your own dataset by following the Create a dataset for training guide. Set the environment variable DATASET_NAME to the name of the dataset on the Hub or if you’re training on your own files, set the environment variable TRAIN_DIR to a path to your dataset. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_prompt to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
+ + + + Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-prior-pokemon-model" + + + + Copied export DATASET_NAME="lambdalabs/pokemon-blip-captions" + +accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \ + --dataset_name=$DATASET_NAME \ + --resolution=768 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --learning_rate=1e-05 \ + --max_grad_norm=1 \ + --checkpoints_total_limit=3 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --validation_prompts="A robot pokemon, 4k photo" \ + --report_to="wandb" \ + --push_to_hub \ + --output_dir="kandi2-decoder-pokemon-model" + + +Once training is finished, you can use your newly trained model for inference! + + + + Copied from diffusers import AutoPipelineForText2Image, DiffusionPipeline +import torch + +prior_pipeline = DiffusionPipeline.from_pretrained("kandi2-prior-pokemon-model", torch_dtype=torch.float16) +prior_components = {"prior_" + k: v for k, v in prior_pipeline.components.items()} +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16) + +pipeline.enable_model_cpu_offload() +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] Feel free to replace kandinsky-community/kandinsky-2-2-decoder with your own trained decoder checkpoint! + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A robot pokemon, 4k photo" +image = pipeline(prompt=prompt).images[0] For the decoder model, you can also perform inference from a saved checkpoint which can be useful for viewing intermediate results. In this case, load the checkpoint into the UNet: Copied from diffusers import AutoPipelineForText2Image, UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("path/to/saved/model" + "/checkpoint-/unet") + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", unet=unet, torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +image = pipeline(prompt="A robot pokemon, 4k photo").images[0] + + + Next steps Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: Read the Kandinsky guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet. Check out the DreamBooth and LoRA training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined!
diff --git a/scrapped_outputs/ed7c2440cd7e0c7972699deed77c96a3.txt b/scrapped_outputs/ed7c2440cd7e0c7972699deed77c96a3.txt new file mode 100644 index 0000000000000000000000000000000000000000..836dee32c8271dc967057672c03614a463c4ec61 --- /dev/null +++ b/scrapped_outputs/ed7c2440cd7e0c7972699deed77c96a3.txt @@ -0,0 +1,324 @@ +Pipelines Pipelines provide a simple way to run state-of-the-art diffusion models in inference by bundling all of the necessary components (multiple independently-trained models, schedulers, and processors) into a single end-to-end class. Pipelines are flexible and they can be adapted to use different schedulers or even model components. All pipelines are built from the base DiffusionPipeline class which provides basic functionality for loading, downloading, and saving all the components. Specific pipeline types (for example StableDiffusionPipeline) loaded with from_pretrained() are automatically detected and the pipeline components are loaded and passed to the __init__ function of the pipeline. You shouldn’t use the DiffusionPipeline class for training. Individual components (for example, UNet2DModel and UNet2DConditionModel) of diffusion pipelines are usually trained individually, so we suggest directly working with them instead. Pipelines do not offer any training functionality. You’ll notice PyTorch’s autograd is disabled by decorating the __call__() method with a torch.no_grad decorator because pipelines should not be used for training. If you’re interested in training, please take a look at the Training guides instead! The table below lists all the pipelines currently available in 🤗 Diffusers and the tasks they support. Click on a pipeline to view its abstract and published paper. Pipeline Tasks AltDiffusion image2image AnimateDiff text2video Attend-and-Excite text2image Audio Diffusion image2audio AudioLDM text2audio AudioLDM2 text2audio BLIP Diffusion text2image Consistency Models unconditional image generation ControlNet text2image, image2image, inpainting ControlNet with Stable Diffusion XL text2image ControlNet-XS text2image ControlNet-XS with Stable Diffusion XL text2image Cycle Diffusion image2image Dance Diffusion unconditional audio generation DDIM unconditional image generation DDPM unconditional image generation DeepFloyd IF text2image, image2image, inpainting, super-resolution DiffEdit inpainting DiT text2image GLIGEN text2image InstructPix2Pix image editing Kandinsky 2.1 text2image, image2image, inpainting, interpolation Kandinsky 2.2 text2image, image2image, inpainting Kandinsky 3 text2image, image2image Latent Consistency Models text2image Latent Diffusion text2image, super-resolution LDM3D text2image, text-to-3D, text-to-pano, upscaling MultiDiffusion text2image MusicLDM text2audio Paint by Example inpainting ParaDiGMS text2image Pix2Pix Zero image editing PixArt-α text2image PNDM unconditional image generation RePaint inpainting Score SDE VE unconditional image generation Self-Attention Guidance text2image Semantic Guidance text2image Shap-E text-to-3D, image-to-3D Spectrogram Diffusion Stable Diffusion text2image, image2image, depth2image, inpainting, image variation, latent upscaler, super-resolution Stable Diffusion Model Editing model editing Stable Diffusion XL text2image, image2image, inpainting Stable Diffusion XL Turbo text2image, image2image, inpainting Stable unCLIP text2image, image variation Stochastic Karras VE unconditional image generation T2I-Adapter text2image Text2Video text2video, video2video Text2Video-Zero text2video unCLIP 
text2image, image variation Unconditional Latent Diffusion unconditional image generation UniDiffuser text2image, image2text, image variation, text variation, unconditional image generation, unconditional audio generation Value-guided planning value guided sampling Versatile Diffusion text2image, image variation VQ Diffusion text2image Wuerstchen text2image DiffusionPipeline class diffusers.DiffusionPipeline < source > ( ) Base class for all pipelines. DiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: move all PyTorch modules to the device of your choice enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. _optional_components (List[str]) — List of all optional components that don’t have to be passed to the +pipeline to function (should be overridden by subclasses). __call__ ( *args **kwargs ) Call self as a function. device < source > ( ) → torch.device Returns +torch.device + +The torch device on which the pipeline is located. + to < source > ( *args **kwargs ) → DiffusionPipeline Parameters dtype (torch.dtype, optional) — +Returns a pipeline with the specified +dtype device (torch.Device, optional) — +Returns a pipeline with the specified +device silence_dtype_warnings (str, optional, defaults to False) — +Whether to omit warnings if the target dtype is not compatible with the target device. Returns +DiffusionPipeline + +The pipeline converted to specified dtype and/or dtype. + Performs Pipeline dtype and/or device conversion. A torch.dtype and torch.device are inferred from the +arguments of self.to(*args, **kwargs). If the pipeline already has the correct torch.dtype and torch.device, then it is returned as is. Otherwise, +the returned pipeline is a copy of self with the desired torch.dtype and torch.device. Here are the ways to call to: to(dtype, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +dtype to(device, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the specified +device to(device=None, dtype=None, silence_dtype_warnings=False) → DiffusionPipeline to return a pipeline with the +specified device and +dtype components < source > ( ) The self.components property can be useful to run different pipelines with the same weights and +configurations without reallocating additional memory. Returns (dict): +A dictionary containing all the modules needed to initialize the pipeline. Examples: Copied >>> from diffusers import ( +... StableDiffusionPipeline, +... StableDiffusionImg2ImgPipeline, +... StableDiffusionInpaintPipeline, +... ) + +>>> text2img = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. 
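As a minimal illustration of the device and dtype conversions described for to() above (the model id is only an example), the following sketch casts a pipeline to half precision and moves it to the GPU:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True)

# dtype-only, device-only, and combined conversions are all supported
pipe = pipe.to(torch.float16)                        # cast weights to half precision
pipe = pipe.to("cuda")                               # move all modules to the GPU
pipe = pipe.to(device="cuda", dtype=torch.float16)   # or do both in one call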
download < source > ( pretrained_model_name **kwargs ) → os.PathLike Parameters pretrained_model_name (str or os.PathLike, optional) — +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. custom_pipeline (str, optional) — +Can be either: + + +A string, the repository id (for example CompVis/ldm-text2im-large-256) of a pretrained +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. + + +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. + + +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + + + +🧪 This is an experimental feature and may change in the future. + +For more information on how to load and create custom pipelines, take a look at How to contribute a +community pipeline. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. 
use_onnx (bool, optional, defaults to False) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. trust_remote_code (bool, optional, defaults to False) — +Whether or not to allow for custom pipelines and components defined on the Hub in their own files. This +option should only be set to True for repositories you trust and in which you have read the code, as +it will execute code present on the Hub on your local machine. Returns +os.PathLike + +A path to the downloaded pipeline. + Download and cache a PyTorch diffusion pipeline from pretrained pipeline weights. To use private or gated models, log-in with +huggingface-cli login. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] enable_model_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. 
device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_pipeline_directory/) containing pipeline weights +saved using +save_pretrained(). + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If “auto” is passed, the +dtype is automatically derived from the model’s weights. custom_pipeline (str, optional) — + +🧪 This is an experimental feature and may change in the future. + +Can be either: + +A string, the repo id (for example hf-internal-testing/diffusers-dummy-pipeline) of a custom +pipeline hosted on the Hub. The repository must contain a file called pipeline.py that defines +the custom pipeline. +A string, the file name of a community pipeline hosted on GitHub under +Community. Valid file +names must match the file name and not the pipeline script (clip_guided_stable_diffusion +instead of clip_guided_stable_diffusion.py). Community pipelines are always loaded from the +current main branch of GitHub. +A path to a directory (./my_pipeline_directory/) containing a custom pipeline. The directory +must contain a file called pipeline.py that defines the custom pipeline. + +For more information on how to load and create custom pipelines, please have a look at Loading and +Adding Custom +Pipelines force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. 
cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. custom_revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, or a commit id similar to +revision when loading a custom pipeline from the Hub. It can be a 🤗 Diffusers version when loading a +custom pipeline from GitHub, otherwise it defaults to "main" when loading from the Hub. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. 
If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. use_onnx (bool, optional, defaults to None) — +If set to True, ONNX weights will always be downloaded if present. If set to False, ONNX weights +will never be downloaded. By default use_onnx defaults to the _is_onnx class attribute which is +False for non-ONNX pipelines and True for ONNX pipelines. ONNX weights include both files ending +with .onnx and .pb. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline +class). The overwritten components are passed directly to the pipelines __init__ method. See example +below for more information. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". This is ignored when +loading from_flax. Instantiate a PyTorch diffusion pipeline from pretrained pipeline weights. The pipeline is set in evaluation mode (model.eval()) by default. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import DiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # Download pipeline that requires an authorization token +>>> # For more information on access tokens, please refer to this section +>>> # of the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") + +>>> # Use a different scheduler +>>> from diffusers import LMSDiscreteScheduler + +>>> scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.scheduler = scheduler maybe_free_model_hooks < source > ( ) Function that offloads all components, removes all model hooks that were added when using +enable_model_cpu_offload and then applies them again. In case the model has not been offloaded this function +is a no-op. Make sure to add this function to the end of the __call__ function of your pipeline so that it +functions correctly when applying enable_model_cpu_offload. numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a pipeline to. Will be created if it doesn’t exist. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. 
You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. FlaxDiffusionPipeline class diffusers.FlaxDiffusionPipeline < source > ( ) Base class for Flax-based pipelines. FlaxDiffusionPipeline stores all components (models, schedulers, and processors) for diffusion pipelines and +provides methods for loading, downloading and saving models. It also includes methods to: enable/disable the progress bar for the denoising iteration Class attributes: config_name (str) — The configuration filename that stores the class and module names of all the +diffusion pipeline’s components. from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the repo id (for example runwayml/stable-diffusion-v1-5) of a pretrained pipeline +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (str or jnp.dtype, optional) — +Override the default jnp.dtype and load the model under this dtype. If "auto", the dtype is +automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (the pipeline components) of the specific pipeline +class. The overwritten components are passed directly to the pipelines __init__ method. Instantiate a Flax-based diffusion pipeline from pretrained pipeline weights. 
The pipeline is set in evaluation mode (`model.eval()) by default and dropout modules are deactivated. If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of FlaxUNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: To use private or gated models, log-in with +huggingface-cli login. Examples: Copied >>> from diffusers import FlaxDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> # Requires to be logged in to Hugging Face hub, +>>> # see more in [the documentation](https://huggingface.co/docs/hub/security-tokens) +>>> pipeline, params = FlaxDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... revision="bf16", +... dtype=jnp.bfloat16, +... ) + +>>> # Download pipeline, but use a different scheduler +>>> from diffusers import FlaxDPMSolverMultistepScheduler + +>>> model_id = "runwayml/stable-diffusion-v1-5" +>>> dpmpp, dpmpp_state = FlaxDPMSolverMultistepScheduler.from_pretrained( +... model_id, +... subfolder="scheduler", +... ) + +>>> dpm_pipe, dpm_params = FlaxStableDiffusionPipeline.from_pretrained( +... model_id, revision="bf16", dtype=jnp.bfloat16, scheduler=dpmpp +... ) +>>> dpm_params["scheduler"] = dpmpp_state numpy_to_pil < source > ( images ) Convert a NumPy image or a batch of images to a PIL image. save_pretrained < source > ( save_directory: Union params: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to which to save. Will be created if it doesn’t exist. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save all saveable variables of the pipeline to a directory. A pipeline variable can be saved and loaded if its +class implements both a save and loading method. The pipeline is easily reloaded using the +from_pretrained() class method. PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. 
variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/ede1d0ccf9427d58632637d6fd3692f3.txt b/scrapped_outputs/ede1d0ccf9427d58632637d6fd3692f3.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0a7733b20b78bdc5197af0cfb33fb05e4395be0 --- /dev/null +++ b/scrapped_outputs/ede1d0ccf9427d58632637d6fd3692f3.txt @@ -0,0 +1,346 @@ +Text-to-image The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photorealistic images given any text input. It’s trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/CompVis/latent-diffusion. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
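The tips above point to the scheduler speed/quality tradeoff. As a short sketch (the scheduler chosen here is just one common option, not a recommendation made by this page), you can swap the default scheduler before generating:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Replace the default scheduler with a multistep solver, which often produces
# good images in fewer denoising steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]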
StableDiffusionPipeline class diffusers.StableDiffusionPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights from_single_file() for loading .ckpt files load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. 
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor from Common Diffusion Noise Schedules and Sample Steps are +Flawed. Guidance rescale factor should fix overexposure when +using zero terminal SNR. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. 
Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") from_single_file < source > ( pretrained_model_link_or_path **kwargs ) Parameters pretrained_model_link_or_path (str or os.PathLike, optional) — +Can be either: +A link to the .ckpt file (for example +"https://huggingface.co//blob/main/.ckpt") on the Hub. +A path to a file containing all pipeline weights. + torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. 
token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. extract_ema (bool, optional, defaults to False) — +Whether to extract the EMA weights or not. Pass True to extract the EMA weights which usually yield +higher quality images for inference. Non-EMA weights are usually better for continuing finetuning. upcast_attention (bool, optional, defaults to None) — +Whether the attention computation should always be upcasted. image_size (int, optional, defaults to 512) — +The image size the model was trained on. Use 512 for all Stable Diffusion v1 models and the Stable +Diffusion v2 base model. Use 768 for Stable Diffusion v2. prediction_type (str, optional) — +The prediction type the model was trained on. Use 'epsilon' for all Stable Diffusion v1 models and +the Stable Diffusion v2 base model. Use 'v_prediction' for Stable Diffusion v2. num_in_channels (int, optional, defaults to None) — +The number of input channels. If None, it is automatically inferred. scheduler_type (str, optional, defaults to "pndm") — +Type of scheduler to use. Should be one of ["pndm", "lms", "heun", "euler", "euler-ancestral", "dpm", "ddim"]. load_safety_checker (bool, optional, defaults to True) — +Whether to load the safety checker or not. text_encoder (CLIPTextModel, optional, defaults to None) — +An instance of CLIPTextModel to use, specifically the +clip-vit-large-patch14 variant. If this +parameter is None, the function loads a new instance of CLIPTextModel by itself if needed. vae (AutoencoderKL, optional, defaults to None) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. If +this parameter is None, the function will load a new instance of [CLIP] by itself, if needed. tokenizer (CLIPTokenizer, optional, defaults to None) — +An instance of CLIPTokenizer to use. If this parameter is None, the function loads a new instance +of CLIPTokenizer by itself if needed. original_config_file (str) — +Path to .yaml config file corresponding to the original architecture. If None, will be +automatically inferred by looking for a key that only exists in SD2.0 models. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to overwrite load and saveable variables (for example the pipeline components of the +specific pipeline class). The overwritten components are directly passed to the pipelines __init__ +method. See example below for more information. Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors +format. The pipeline is set in evaluation mode (model.eval()) by default. Examples: Copied >>> from diffusers import StableDiffusionPipeline + +>>> # Download pipeline from huggingface.co and cache. +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors" +... 
) + +>>> # Download pipeline from local file +>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt +>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly") + +>>> # Enable float16 and move to GPU +>>> pipeline = StableDiffusionPipeline.from_single_file( +... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt", +... torch_dtype=torch.float16, +... ) +>>> pipeline.to("cuda") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. 
Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. fuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 unfuse_qkv_projections < source > ( unet: bool = True vae: bool = True ) Parameters unet (bool, defaults to True) — To apply fusion on the UNet. vae (bool, defaults to True) — To apply fusion on the VAE. Disable QKV projection fusion if enabled. This API is 🧪 experimental. 
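Since enable_freeu() above has no usage example, here is a brief sketch. The scaling factors below are values commonly reported for Stable Diffusion v1.x rather than values prescribed by this page; consult the official FreeU repository for factors tuned to other pipelines.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Backbone (b1, b2) and skip (s1, s2) scaling factors; commonly cited settings
# for Stable Diffusion v1.x, not values taken from this document
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Disable the mechanism again when it is no longer wanted
pipe.disable_freeu()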
StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. FlaxStableDiffusionPipeline class diffusers.FlaxStableDiffusionPipeline < source > ( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = ) Parameters vae (FlaxAutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (FlaxCLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (FlaxUNet2DConditionModel) — +A FlaxUNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or +FlaxDPMSolverMultistepScheduler. safety_checker (FlaxStableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Flax-based pipeline for text-to-image generation using Stable Diffusion. This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt_ids: array params: Union prng_seed: Array num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. latents (jnp.ndarray, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +array is generated by sampling using the supplied random generator. jit (bool, defaults to False) — +Whether to run pmap versions of the generation and safety scoring functions. + +This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a +future release. + return_dict (bool, optional, defaults to True) — +Whether or not to return a FlaxStableDiffusionPipelineOutput instead of +a plain tuple. Returns +FlaxStableDiffusionPipelineOutput or tuple + +If return_dict is True, FlaxStableDiffusionPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated images +and the second element is a list of bools indicating whether the corresponding generated image +contains “not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import jax +>>> import numpy as np +>>> from flax.jax_utils import replicate +>>> from flax.training.common_utils import shard + +>>> from diffusers import FlaxStableDiffusionPipeline + +>>> pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", revision="bf16", dtype=jax.numpy.bfloat16 +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" + +>>> prng_seed = jax.random.PRNGKey(0) +>>> num_inference_steps = 50 + +>>> num_samples = jax.device_count() +>>> prompt = num_samples * [prompt] +>>> prompt_ids = pipeline.prepare_inputs(prompt) +# shard inputs and rng + +>>> params = replicate(params) +>>> prng_seed = jax.random.split(prng_seed, jax.device_count()) +>>> prompt_ids = shard(prompt_ids) + +>>> images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +>>> images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) FlaxStableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput < source > ( images: ndarray nsfw_content_detected: List ) Parameters images (np.ndarray) — +Denoised images of array shape of (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content +or None if safety checking could not be performed. Output class for Flax-based Stable Diffusion pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/ee2f6b5d73ebedf40586777234c8e32a.txt b/scrapped_outputs/ee2f6b5d73ebedf40586777234c8e32a.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c39533f3811507775688b8fc90c71c93f8c744f --- /dev/null +++ b/scrapped_outputs/ee2f6b5d73ebedf40586777234c8e32a.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. 
Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. 
image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. 
You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. 
unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. 
requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. 
The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is to push the generated image towards the inital image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. Higher image guidance scale encourages to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. 
callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... 
guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/ee525e32b3fb5635de091d3188d83aa6.txt b/scrapped_outputs/ee525e32b3fb5635de091d3188d83aa6.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7b663b381edb40c44b5dc45124142bca44fb798 --- /dev/null +++ b/scrapped_outputs/ee525e32b3fb5635de091d3188d83aa6.txt @@ -0,0 +1,148 @@ +PyTorch 2.0 🤗 Diffusers supports the latest optimizations from PyTorch 2.0 which include: A memory-efficient attention implementation, scaled dot product attention, without requiring any extra dependencies such as xFormers. torch.compile, a just-in-time (JIT) compiler to provide an extra performance boost when individual models are compiled. Both of these optimizations require PyTorch 2.0 or later and 🤗 Diffusers > 0.13.0. Copied pip install --upgrade torch diffusers Scaled dot product attention torch.nn.functional.scaled_dot_product_attention (SDPA) is an optimized and memory-efficient attention (similar to xFormers) that automatically enables several other optimizations depending on the model inputs and GPU type. SDPA is enabled by default if you’re using PyTorch 2.0 and the latest version of 🤗 Diffusers, so you don’t need to add anything to your code. However, if you want to explicitly enable it, you can set a DiffusionPipeline to use AttnProcessor2_0: Copied import torch + from diffusers import DiffusionPipeline ++ from diffusers.models.attention_processor import AttnProcessor2_0 + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_attn_processor(AttnProcessor2_0()) + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] SDPA should be as fast and memory efficient as xFormers; check the benchmark for more details. In some cases - such as making the pipeline more deterministic or converting it to other formats - it may be helpful to use the vanilla attention processor, AttnProcessor. 
To revert to AttnProcessor, call the set_default_attn_processor() function on the pipeline: Copied import torch + from diffusers import DiffusionPipeline + + pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") ++ pipe.unet.set_default_attn_processor() + + prompt = "a photo of an astronaut riding a horse on mars" + image = pipe(prompt).images[0] torch.compile The torch.compile function can often provide an additional speed-up to your PyTorch code. In 🤗 Diffusers, it is usually best to wrap the UNet with torch.compile because it does most of the heavy lifting in the pipeline. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "a photo of an astronaut riding a horse on mars" +steps = 30 +batch_size = 1 +images = pipe(prompt, num_inference_steps=steps, num_images_per_prompt=batch_size).images[0] Depending on GPU type, torch.compile can provide an additional speed-up of 5-300x on top of SDPA! If you’re using more recent GPU architectures such as Ampere (A100, 3090), Ada (4090), and Hopper (H100), torch.compile is able to squeeze even more performance out of these GPUs. Compilation requires some time to complete, so it is best suited for situations where you prepare your pipeline once and then perform the same type of inference operations multiple times. For example, calling the compiled pipeline on a different image size triggers compilation again which can be expensive. For more information and different options about torch.compile, refer to the torch_compile tutorial. Benchmark We conducted a comprehensive benchmark with PyTorch 2.0’s efficient attention implementation and torch.compile across different GPUs and batch sizes for five of our most used pipelines. The code is benchmarked on 🤗 Diffusers v0.17.0.dev0 to optimize torch.compile usage (see here for more details).
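One practical note before the numbers: the first call to a compiled pipeline pays the compilation cost, so timings only stabilize from the second or third call at a given resolution, which is why the benchmark scripts below call each pipeline several times. A rough sketch, not part of the official benchmark code, of observing this warm-up locally:
import time
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

prompt = "ghibli style, a fantasy landscape with castles"

for i in range(3):
    torch.cuda.synchronize()
    start = time.perf_counter()
    _ = pipe(prompt, num_inference_steps=30).images[0]
    torch.cuda.synchronize()
    # The first iteration includes compilation time; later ones reuse the compiled graph.
    print(f"call {i}: {time.perf_counter() - start:.1f}s")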
Expand the dropdown below to find the code used to benchmark each pipeline: Stable Diffusion text-to-image Copied from diffusers import DiffusionPipeline +import torch + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = DiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + images = pipe(prompt=prompt).images Stable Diffusion image-to-image Copied from diffusers import StableDiffusionImg2ImgPipeline +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False + +pipe = StableDiffusionImg2ImgPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] Stable Diffusion inpainting Copied from diffusers import StableDiffusionInpaintPipeline +from diffusers.utils import load_image +import torch + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +path = "runwayml/stable-diffusion-inpainting" + +run_compile = True # Set True / False + +pipe = StableDiffusionInpaintPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=True) +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] ControlNet Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +init_image = load_image(url) +init_image = init_image.resize((512, 512)) + +path = "runwayml/stable-diffusion-v1-5" + +run_compile = True # Set True / False +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16, use_safetensors=True) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + path, controlnet=controlnet, torch_dtype=torch.float16, use_safetensors=True +) + +pipe = pipe.to("cuda") +pipe.unet.to(memory_format=torch.channels_last) +pipe.controlnet.to(memory_format=torch.channels_last) + +if run_compile: + print("Run torch compile") + 
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) + pipe.controlnet = torch.compile(pipe.controlnet, mode="reduce-overhead", fullgraph=True) + +prompt = "ghibli style, a fantasy landscape with castles" + +for _ in range(3): + image = pipe(prompt=prompt, image=init_image).images[0] DeepFloyd IF text-to-image + upscaling Copied from diffusers import DiffusionPipeline +import torch + +run_compile = True # Set True / False + +pipe_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_1.to("cuda") +pipe_2 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-II-M-v1.0", variant="fp16", text_encoder=None, torch_dtype=torch.float16, use_safetensors=True) +pipe_2.to("cuda") +pipe_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, use_safetensors=True) +pipe_3.to("cuda") + + +pipe_1.unet.to(memory_format=torch.channels_last) +pipe_2.unet.to(memory_format=torch.channels_last) +pipe_3.unet.to(memory_format=torch.channels_last) + +if run_compile: + pipe_1.unet = torch.compile(pipe_1.unet, mode="reduce-overhead", fullgraph=True) + pipe_2.unet = torch.compile(pipe_2.unet, mode="reduce-overhead", fullgraph=True) + pipe_3.unet = torch.compile(pipe_3.unet, mode="reduce-overhead", fullgraph=True) + +prompt = "the blue hulk" + +prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) +neg_prompt_embeds = torch.randn((1, 2, 4096), dtype=torch.float16) + +for _ in range(3): + image_1 = pipe_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_2 = pipe_2(image=image_1, prompt_embeds=prompt_embeds, negative_prompt_embeds=neg_prompt_embeds, output_type="pt").images + image_3 = pipe_3(prompt=prompt, image=image_1, noise_level=100).images The graph below highlights the relative speed-ups for the StableDiffusionPipeline across five GPU families with PyTorch 2.0 and torch.compile enabled. The benchmarks for the following graphs are measured in number of iterations/second. To give you an even better idea of how this speed-up holds for the other pipelines, consider the following +graph for an A100 with PyTorch 2.0 and torch.compile: In the following tables, we report our findings in terms of the number of iterations/second. 
A100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 21.66 23.13 44.03 49.74 SD - img2img 21.81 22.40 43.92 46.32 SD - inpaint 22.24 23.23 43.76 49.25 SD - controlnet 15.02 15.82 32.13 36.08 IF 20.21 / 13.84 / 24.00 20.12 / 13.70 / 24.03 ❌ 97.34 / 27.23 / 111.66 SDXL - txt2img 8.64 9.9 - - A100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 11.6 13.12 14.62 17.27 SD - img2img 11.47 13.06 14.66 17.25 SD - inpaint 11.67 13.31 14.88 17.48 SD - controlnet 8.28 9.38 10.51 12.41 IF 25.02 18.04 ❌ 48.47 SDXL - txt2img 2.44 2.74 - - A100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.04 3.6 3.83 4.68 SD - img2img 2.98 3.58 3.83 4.67 SD - inpaint 3.04 3.66 3.9 4.76 SD - controlnet 2.15 2.58 2.74 3.35 IF 8.78 9.82 ❌ 16.77 SDXL - txt2img 0.64 0.72 - - V100 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 18.99 19.14 20.95 22.17 SD - img2img 18.56 19.18 20.95 22.11 SD - inpaint 19.14 19.06 21.08 22.20 SD - controlnet 13.48 13.93 15.18 15.88 IF 20.01 / 9.08 / 23.34 19.79 / 8.98 / 24.10 ❌ 55.75 / 11.57 / 57.67 V100 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 5.96 5.89 6.83 6.86 SD - img2img 5.90 5.91 6.81 6.82 SD - inpaint 5.99 6.03 6.93 6.95 SD - controlnet 4.26 4.29 4.92 4.93 IF 15.41 14.76 ❌ 22.95 V100 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.66 1.66 1.92 1.90 SD - img2img 1.65 1.65 1.91 1.89 SD - inpaint 1.69 1.69 1.95 1.93 SD - controlnet 1.19 1.19 OOM after warmup 1.36 IF 5.43 5.29 ❌ 7.06 T4 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.9 6.95 7.3 7.56 SD - img2img 6.84 6.99 7.04 7.55 SD - inpaint 6.91 6.7 7.01 7.37 SD - controlnet 4.89 4.86 5.35 5.48 IF 17.42 / 2.47 / 18.52 16.96 / 2.45 / 18.69 ❌ 24.63 / 2.47 / 23.39 SDXL - txt2img 1.15 1.16 - - T4 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.79 1.79 2.03 1.99 SD - img2img 1.77 1.77 2.05 2.04 SD - inpaint 1.81 1.82 2.09 2.09 SD - controlnet 1.34 1.27 1.47 1.46 IF 5.79 5.61 ❌ 7.39 SDXL - txt2img 0.288 0.289 - - T4 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 2.34s 2.30s OOM after 2nd iteration 1.99s SD - img2img 2.35s 2.31s OOM after warmup 2.00s SD - inpaint 2.30s 2.26s OOM after 2nd iteration 1.95s SD - controlnet OOM after 2nd iteration OOM after 2nd iteration OOM after warmup OOM after warmup IF * 1.44 1.44 ❌ 1.94 SDXL - txt2img OOM OOM - - RTX 3090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 22.56 22.84 23.84 25.69 SD - img2img 22.25 22.61 24.1 25.83 SD - inpaint 22.22 22.54 24.26 26.02 SD - controlnet 16.03 16.33 17.38 18.56 IF 27.08 / 9.07 / 31.23 26.75 / 8.92 / 31.47 ❌ 68.08 / 11.16 / 65.29 RTX 3090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 6.46 6.35 7.29 7.3 SD - img2img 6.33 6.27 7.31 7.26 SD 
- inpaint 6.47 6.4 7.44 7.39 SD - controlnet 4.59 4.54 5.27 5.26 IF 16.81 16.62 ❌ 21.57 RTX 3090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 1.7 1.69 1.93 1.91 SD - img2img 1.68 1.67 1.93 1.9 SD - inpaint 1.72 1.71 1.97 1.94 SD - controlnet 1.23 1.22 1.4 1.38 IF 5.01 5.00 ❌ 6.33 RTX 4090 (batch size: 1) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 40.5 41.89 44.65 49.81 SD - img2img 40.39 41.95 44.46 49.8 SD - inpaint 40.51 41.88 44.58 49.72 SD - controlnet 29.27 30.29 32.26 36.03 IF 69.71 / 18.78 / 85.49 69.13 / 18.80 / 85.56 ❌ 124.60 / 26.37 / 138.79 SDXL - txt2img 6.8 8.18 - - RTX 4090 (batch size: 4) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 12.62 12.84 15.32 15.59 SD - img2img 12.61 12,.79 15.35 15.66 SD - inpaint 12.65 12.81 15.3 15.58 SD - controlnet 9.1 9.25 11.03 11.22 IF 31.88 31.14 ❌ 43.92 SDXL - txt2img 2.19 2.35 - - RTX 4090 (batch size: 16) Pipeline torch 2.0 - no compile torch nightly - no compile torch 2.0 - compile torch nightly - compile SD - txt2img 3.17 3.2 3.84 3.85 SD - img2img 3.16 3.2 3.84 3.85 SD - inpaint 3.17 3.2 3.85 3.85 SD - controlnet 2.23 2.3 2.7 2.75 IF 9.26 9.2 ❌ 13.31 SDXL - txt2img 0.52 0.53 - - Notes Follow this PR for more details on the environment used for conducting the benchmarks. For the DeepFloyd IF pipeline where batch sizes > 1, we only used a batch size of > 1 in the first IF pipeline for text-to-image generation and NOT for upscaling. That means the two upscaling pipelines received a batch size of 1. Thanks to Horace He from the PyTorch team for their support in improving our support of torch.compile() in Diffusers. diff --git a/scrapped_outputs/ee6e6caf32ab608921d0c96393be9f19.txt b/scrapped_outputs/ee6e6caf32ab608921d0c96393be9f19.txt new file mode 100644 index 0000000000000000000000000000000000000000..49d64c2bb4b20fbd4bc944a6449825ee53c95919 --- /dev/null +++ b/scrapped_outputs/ee6e6caf32ab608921d0c96393be9f19.txt @@ -0,0 +1,41 @@ +KDPM2AncestralDiscreteScheduler The KDPM2DiscreteScheduler with ancestral sampling is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2AncestralDiscreteScheduler class diffusers.KDPM2AncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. 
If True, +the sigmas are determined according to a sequence of noise levels {σi}. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler with ancestral sampling is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating +the Design Space of Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union generator: Optional = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_ddim.SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
diff --git a/scrapped_outputs/ee6fcd6746a4e0e29c76f8e451589654.txt b/scrapped_outputs/ee6fcd6746a4e0e29c76f8e451589654.txt new file mode 100644 index 0000000000000000000000000000000000000000..260e2d1961cab74b037b8005bfcbb5822351f744 --- /dev/null +++ b/scrapped_outputs/ee6fcd6746a4e0e29c76f8e451589654.txt @@ -0,0 +1,197 @@ +UniDiffuser The UniDiffuser model was proposed in One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale by Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu. The abstract from the paper is: This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit all distributions relevant to a set of multi-modal data in one model. Our key insight is — learning diffusion models for marginal, conditional, and joint distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e. timesteps) can be different for different modalities. Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model — perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality. UniDiffuser is parameterized by a transformer for diffusion models to handle input types of different modalities. Implemented on large-scale paired image-text data, UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the bespoken models (e.g., Stable Diffusion and DALL-E 2) in representative tasks (e.g., text-to-image generation). You can find the original codebase at thu-ml/unidiffuser and additional checkpoints at thu-ml. There is currently an issue on PyTorch 1.X where the output images are all black or the pixel values become NaNs. This issue can be mitigated by switching to PyTorch 2.X. This pipeline was contributed by dg845. ❤️ Usage Examples Because the UniDiffuser model is trained to model the joint distribution of (image, text) pairs, it is capable of performing a diverse range of generation tasks: Unconditional Image and Text Generation Unconditional generation (where we start from only latents sampled from a standard Gaussian prior) from a UniDiffuserPipeline will produce a (image, text) pair: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Unconditional image and text generation. The generation task is automatically inferred. +sample = pipe(num_inference_steps=20, guidance_scale=8.0) +image = sample.images[0] +text = sample.text[0] +image.save("unidiffuser_joint_sample_image.png") +print(text) This is also called “joint” generation in the UniDiffuser paper, since we are sampling from the joint image-text distribution. Note that the generation task is inferred from the inputs used when calling the pipeline. 
+It is also possible to manually specify the unconditional generation task (“mode”) manually with UniDiffuserPipeline.set_joint_mode(): Copied # Equivalent to the above. +pipe.set_joint_mode() +sample = pipe(num_inference_steps=20, guidance_scale=8.0) When the mode is set manually, subsequent calls to the pipeline will use the set mode without attempting to infer the mode. +You can reset the mode with UniDiffuserPipeline.reset_mode(), after which the pipeline will once again infer the mode. You can also generate only an image or only text (which the UniDiffuser paper calls “marginal” generation since we sample from the marginal distribution of images and text, respectively): Copied # Unlike other generation tasks, image-only and text-only generation don't use classifier-free guidance +# Image-only generation +pipe.set_image_mode() +sample_image = pipe(num_inference_steps=20).images[0] +# Text-only generation +pipe.set_text_mode() +sample_text = pipe(num_inference_steps=20).text[0] Text-to-Image Generation UniDiffuser is also capable of sampling from conditional distributions; that is, the distribution of images conditioned on a text prompt or the distribution of texts conditioned on an image. +Here is an example of sampling from the conditional image distribution (text-to-image generation or text-conditioned image generation): Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image The text2img mode requires that either an input prompt or prompt_embeds be supplied. You can set the text2img mode manually with UniDiffuserPipeline.set_text_to_image_mode(). Image-to-Text Generation Similarly, UniDiffuser can also produce text samples given an image (image-to-text or image-conditioned text generation): Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) The img2text mode requires that an input image be supplied. You can set the img2text mode manually with UniDiffuserPipeline.set_image_to_text_mode(). Image Variation The UniDiffuser authors suggest performing image variation through a “round-trip” generation method, where given an input image, we first perform an image-to-text generation, and then perform a text-to-image generation on the outputs of the first generation. 
+This produces a new image which is semantically similar to the input image: Copied import torch + +from diffusers import UniDiffuserPipeline +from diffusers.utils import load_image + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Image variation can be performed with an image-to-text generation followed by a text-to-image generation: +# 1. Image-to-text generation +image_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unidiffuser/unidiffuser_example_image.jpg" +init_image = load_image(image_url).resize((512, 512)) + +sample = pipe(image=init_image, num_inference_steps=20, guidance_scale=8.0) +i2t_text = sample.text[0] +print(i2t_text) + +# 2. Text-to-image generation +sample = pipe(prompt=i2t_text, num_inference_steps=20, guidance_scale=8.0) +final_image = sample.images[0] +final_image.save("unidiffuser_image_variation_sample.png") Text Variation Similarly, text variation can be performed on an input prompt with a text-to-image generation followed by a image-to-text generation: Copied import torch + +from diffusers import UniDiffuserPipeline + +device = "cuda" +model_id_or_path = "thu-ml/unidiffuser-v1" +pipe = UniDiffuserPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +pipe.to(device) + +# Text variation can be performed with a text-to-image generation followed by a image-to-text generation: +# 1. Text-to-image generation +prompt = "an elephant under the sea" + +sample = pipe(prompt=prompt, num_inference_steps=20, guidance_scale=8.0) +t2i_image = sample.images[0] +t2i_image.save("unidiffuser_text2img_sample_image.png") + +# 2. Image-to-text generation +sample = pipe(image=t2i_image, num_inference_steps=20, guidance_scale=8.0) +final_prompt = sample.text[0] +print(final_prompt) Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. UniDiffuserPipeline class diffusers.UniDiffuserPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel image_encoder: CLIPVisionModelWithProjection clip_image_processor: CLIPImageProcessor clip_tokenizer: CLIPTokenizer text_decoder: UniDiffuserTextDecoder text_tokenizer: GPT2Tokenizer unet: UniDiffuserModel scheduler: KarrasDiffusionSchedulers ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. This +is part of the UniDiffuser image representation along with the CLIP vision encoding. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). image_encoder (CLIPVisionModel) — +A CLIPVisionModel to encode images as part of its image representation along with the VAE +latent representation. image_processor (CLIPImageProcessor) — +CLIPImageProcessor to preprocess an image before CLIP encoding it with image_encoder. clip_tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize the prompt before encoding it with text_encoder. text_decoder (UniDiffuserTextDecoder) — +Frozen text decoder. This is a GPT-style model which is used to generate text from the UniDiffuser +embedding. text_tokenizer (GPT2Tokenizer) — +A GPT2Tokenizer to decode text for text generation; used along with the text_decoder. 
unet (UniDiffuserModel) — +A U-ViT model with UNet-style skip connections between transformer +layers to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image and/or text latents. The +original UniDiffuser paper uses the DPMSolverMultistepScheduler scheduler. Pipeline for a bimodal image-text model which supports unconditional text and image generation, text-conditioned +image generation, image-conditioned text generation, and joint image-text generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None data_type: Optional = 1 num_inference_steps: int = 50 guidance_scale: float = 8.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 num_prompts_per_image: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_latents: Optional = None vae_latents: Optional = None clip_latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → ImageTextPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. +Required for text-conditioned image generation (text2img) mode. image (torch.FloatTensor or PIL.Image.Image, optional) — +Image or tensor representing an image batch. Required for image-conditioned text generation +(img2text) mode. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. data_type (int, optional, defaults to 1) — +The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type +embedding; this is added for compatibility with the +UniDiffuser-v1 checkpoint. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in +text-conditioned image generation (text2img) mode. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and +img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. num_prompts_per_image (int, optional, defaults to 1) — +The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and +text mode.
If the mode is joint and both num_images_per_prompt and num_prompts_per_image are +supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint +image-text generation. Can be used to tweak the same generation with different prompts. If not +provided, a latents tensor is generated by sampling using the supplied random generator. This assumes +a full set of VAE, CLIP, and text latents; if supplied, it overrides the values of prompt_latents, +vae_latents, and clip_latents. prompt_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. vae_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. clip_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. Used in text-conditioned +image generation (text2img) mode. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. Used +in text-conditioned image generation (text2img) mode. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImageTextPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +ImageTextPipelineOutput or tuple + +If return_dict is True, ImageTextPipelineOutput is returned, otherwise a +tuple is returned where the first element is a list with the generated images and the second element +is a list of generated texts. + The call function to the pipeline for generation. disable_vae_slicing < source > ( ) Disable sliced VAE decoding.
If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. reset_mode < source > ( ) Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs. set_image_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) image generation. set_image_to_text_mode < source > ( ) Manually set the generation mode to image-conditioned text generation. set_joint_mode < source > ( ) Manually set the generation mode to unconditional joint image-text generation. set_text_mode < source > ( ) Manually set the generation mode to unconditional (“marginal”) text generation. set_text_to_image_mode < source > ( ) Manually set the generation mode to text-conditioned image generation. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/ee7fd7daf695684fd6beed6e82dfee0d.txt b/scrapped_outputs/ee7fd7daf695684fd6beed6e82dfee0d.txt new file mode 100644 index 0000000000000000000000000000000000000000..4049d6b91ac5929ba92113dc859ead44d28a4f4e --- /dev/null +++ b/scrapped_outputs/ee7fd7daf695684fd6beed6e82dfee0d.txt @@ -0,0 +1,45 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. 
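To see where scale_model_input() fits, here is a minimal sketch of a manual sampling loop (not from the original page; the denoiser below is a dummy stand-in for a real noise-prediction model such as a UNet):
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(30)
denoiser = lambda x, t: torch.zeros_like(x)  # dummy stand-in for a real noise-prediction model
sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma  # start from scaled Gaussian noise
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # applies the (sigma**2 + 1) ** 0.5 scaling
    noise_pred = denoiser(model_input, t)
    sample = scheduler.step(noise_pred, t, sample).prev_sample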
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/eeb7977891713559369c50b1c743b11c.txt b/scrapped_outputs/eeb7977891713559369c50b1c743b11c.txt new file mode 100644 index 0000000000000000000000000000000000000000..9cfc96be6aaacc8d08b00ff6b4042e641b297921 --- /dev/null +++ b/scrapped_outputs/eeb7977891713559369c50b1c743b11c.txt @@ -0,0 +1,13 @@ +PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin class diffusers.loaders.PeftAdapterMixin < source > ( ) A class containing all functions for loading and using adapters weights that are supported in PEFT library. For +more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. Get a list of the active adapters. active_adapters < source > ( ) Gets the current list of active adapters of the model. 
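For example, a minimal sketch (assuming PEFT is installed and the runwayml/stable-diffusion-v1-5 checkpoint is available; the adapter name and LoRA settings are illustrative) of attaching an adapter to a UNet2DConditionModel and then querying or toggling it:
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
lora_config = LoraConfig(r=4, lora_alpha=4, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
unet.add_adapter(lora_config, adapter_name="my_lora")  # "my_lora" is an illustrative name
print(unet.active_adapters())  # ["my_lora"]
unet.disable_adapters()  # fall back to the base model only
unet.enable_adapters()   # re-enable all attached adapters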
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +documentation. diff --git a/scrapped_outputs/eeb7b2ea6db9f1bb341ff877f12d73c0.txt b/scrapped_outputs/eeb7b2ea6db9f1bb341ff877f12d73c0.txt new file mode 100644 index 0000000000000000000000000000000000000000..70b4217dd0c7138c00d1e18f1498d6ca0f929b68 --- /dev/null +++ b/scrapped_outputs/eeb7b2ea6db9f1bb341ff877f12d73c0.txt @@ -0,0 +1,31 @@ +Load different Stable Diffusion formats Stable Diffusion models are available in different formats depending on the framework they’re trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as using different schedulers for inference, building your custom pipeline, and a variety of techniques and methods for optimizing inference speed. We highly recommend using the .safetensors format because it is more secure than traditional pickled files which are vulnerable and can be exploited to execute any code on your machine (learn more in the Load safetensors guide). This guide will show you how to convert other Stable Diffusion formats to be compatible with 🤗 Diffusers. PyTorch .ckpt The checkpoint - or .ckpt - format is commonly used to store and save models. The .ckpt file contains the entire model and is typically several GBs in size. While you can load and use a .ckpt file directly with the from_single_file() method, it is generally better to convert the .ckpt file to 🤗 Diffusers so both formats are available. There are two options for converting a .ckpt file: use a Space to convert the checkpoint or convert the .ckpt file with a script. Convert with a Space The easiest and most convenient way to convert a .ckpt file is to use the SD to Diffusers Space. You can follow the instructions on the Space to convert the .ckpt file. 
This approach works well for basic models, but it may struggle with more customized models. You’ll know the Space failed if it returns an empty pull request or error. In this case, you can try converting the .ckpt file with a script. Convert with a script 🤗 Diffusers provides a conversion script for converting .ckpt files. This approach is more reliable than the Space above. Before you start, make sure you have a local clone of 🤗 Diffusers to run the script and log in to your Hugging Face account so you can open pull requests and push your converted model to the Hub. Copied huggingface-cli login To use the script: Git clone the repository containing the .ckpt file you want to convert. For this example, let’s convert this TemporalNet .ckpt file: Copied git lfs install +git clone https://huggingface.co/CiaraRowles/TemporalNet Open a pull request on the repository where you’re converting the checkpoint from: Copied cd TemporalNet && git fetch origin refs/pr/13:pr/13 +git checkout pr/13 There are several input arguments to configure in the conversion script, but the most important ones are: checkpoint_path: the path to the .ckpt file to convert. original_config_file: a YAML file defining the configuration of the original architecture. If you can’t find this file, try searching for the YAML file in the GitHub repository where you found the .ckpt file. dump_path: the path to the converted model. For example, you can take the cldm_v15.yaml file from the ControlNet repository because the TemporalNet model is a Stable Diffusion v1.5 and ControlNet model. Now you can run the script to convert the .ckpt file: Copied python ../diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path temporalnetv3.ckpt --original_config_file cldm_v15.yaml --dump_path ./ --controlnet Once the conversion is done, upload your converted model and test out the resulting pull request! Copied git push origin pr/13:refs/pr/13 Keras .pb or .h5 🧪 This is an experimental feature. Only Stable Diffusion v1 checkpoints are supported by the Convert KerasCV Space at the moment. KerasCV supports training for Stable Diffusion v1 and v2. However, it offers limited support for experimenting with Stable Diffusion models for inference and deployment whereas 🤗 Diffusers has a more complete set of features for this purpose, such as different noise schedulers, flash attention, and other +optimization techniques. The Convert KerasCV Space converts .pb or .h5 files to PyTorch, and then wraps them in a StableDiffusionPipeline so it is ready for inference. The converted checkpoint is stored in a repository on the Hugging Face Hub. For this example, let’s convert the sayakpaul/textual-inversion-kerasio checkpoint which was trained with Textual Inversion. It uses the special token to personalize images with cats. The Convert KerasCV Space allows you to input the following: Your Hugging Face token. Paths to download the UNet and text encoder weights from. Depending on how the model was trained, you don’t necessarily need to provide the paths to both the UNet and text encoder. For example, Textual Inversion only requires the embeddings from the text encoder and a text-to-image model only requires the UNet weights. Placeholder token is only applicable for textual inversion models. The output_repo_prefix is the name of the repository where the converted model is stored. Click the Submit button to automatically convert the KerasCV checkpoint! 
Once the checkpoint is successfully converted, you’ll see a link to the new repository containing the converted checkpoint. Follow the link to the new repository, and you’ll see the Convert KerasCV Space generated a model card with an inference widget to try out the converted model. If you prefer to run inference with code, click on the Use in Diffusers button in the upper right corner of the model card to copy and paste the code snippet: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) Then, you can generate an image like: Copied from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline", use_safetensors=True +) +pipeline.to("cuda") + +placeholder_token = "" +prompt = f"two {placeholder_token} getting married, photorealistic, high quality" +image = pipeline(prompt, num_inference_steps=50).images[0] A1111 LoRA files Automatic1111 (A1111) is a popular web UI for Stable Diffusion that supports model sharing platforms like Civitai. Models trained with the Low-Rank Adaptation (LoRA) technique are especially popular because they’re fast to train and have a much smaller file size than a fully finetuned model. 🤗 Diffusers supports loading A1111 LoRA checkpoints with load_lora_weights(): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "Lykon/dreamshaper-xl-1-0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Download a LoRA checkpoint from Civitai; this example uses the Blueprintify SD XL 1.0 checkpoint, but feel free to try out any LoRA checkpoint! Copied # uncomment to download the safetensor weights +#!wget https://civitai.com/api/download/models/168776 -O blueprintify.safetensors Load the LoRA checkpoint into the pipeline with the load_lora_weights() method: Copied pipeline.load_lora_weights(".", weight_name="blueprintify.safetensors") Now you can use the pipeline to generate images: Copied prompt = "bl3uprint, a highly detailed blueprint of the empire state building, explaining how to build all parts, many txt, blueprint grid backdrop" +negative_prompt = "lowres, cropped, worst quality, low quality, normal quality, artifacts, signature, watermark, username, blurry, more than one bridge, bad architecture" + +image = pipeline( + prompt=prompt, + negative_prompt=negative_prompt, + generator=torch.manual_seed(0), +).images[0] +image diff --git a/scrapped_outputs/eedc47823ccc91e1142fdc6e5a02318a.txt b/scrapped_outputs/eedc47823ccc91e1142fdc6e5a02318a.txt new file mode 100644 index 0000000000000000000000000000000000000000..cdd78d68bba0e712cfad73d0a4eb0e2833f322c8 --- /dev/null +++ b/scrapped_outputs/eedc47823ccc91e1142fdc6e5a02318a.txt @@ -0,0 +1,15 @@ +Outputs All model outputs are subclasses of BaseOutput, data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries. For example: Copied from diffusers import DDIMPipeline + +pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32") +outputs = pipeline() The outputs object is a ImagePipelineOutput which means it has an image attribute. 
You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get None: Copied outputs.images +outputs["images"] When considering the outputs object as a tuple, it only considers the attributes that don’t have None values. +For instance, retrieving an image by indexing into it returns the tuple (outputs.images): Copied outputs[:1] To check a specific pipeline or model output, refer to its corresponding API documentation. BaseOutput class diffusers.utils.BaseOutput < source > ( ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a +tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular +Python dictionary. You can’t unpack a BaseOutput directly. Use the to_tuple() method to convert it to a tuple +first. to_tuple < source > ( ) Convert self to a tuple containing all the attributes/keys that are not None. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. FlaxImagePipelineOutput class diffusers.pipelines.pipeline_flax_utils.FlaxImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. ImageTextPipelineOutput class diffusers.ImageTextPipelineOutput < source > ( images: Union text: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). text (List[str] or List[List[str]]) — +List of generated text strings of length batch_size or a list of list of strings whose outer list has +length batch_size. Output class for joint image-text pipelines. diff --git a/scrapped_outputs/eeefd49e83fecd00cdb9e79867d882f8.txt b/scrapped_outputs/eeefd49e83fecd00cdb9e79867d882f8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d7fd413d05fa387c0d26fa8503b5642d90f939e1 --- /dev/null +++ b/scrapped_outputs/eeefd49e83fecd00cdb9e79867d882f8.txt @@ -0,0 +1,135 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. 
You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: + + +```bash +cd examples/controlnet +pip install -r requirements.txt +``` + + +If you have access to a TPU, the Flax training script runs even faster! Let’s run the training script on the Google Cloud TPU VM. Create a single TPU v4-8 VM and connect to it: Copied ZONE=us-central2-b +TPU_TYPE=v4-8 +VM_NAME=hg_flax + +gcloud alpha compute tpus tpu-vm create $VM_NAME \ + --zone $ZONE \ + --accelerator-type $TPU_TYPE \ + --version tpu-vm-v4-base + +gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE -- \ Install JAX 0.4.5: Copied pip install "jax[tpu]==0.4.5" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html Then install the required dependencies for the Flax script: Copied cd examples/controlnet +pip install -r requirements_flax.txt + + + 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. + + +On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ + + +On a 12GB GPU, you’ll need bitsandbytes 8-bit optimizer, gradient checkpointing, xFormers, and set the gradients to None instead of zero to reduce your memory-usage. Copied accelerate launch train_controlnet.py \ + --use_8bit_adam \ + --gradient_checkpointing \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + + +On a 8GB GPU, you’ll need to use DeepSpeed to offload some of the tensors from the vRAM to either the CPU or NVME to allow training with less GPU memory. Run the following command to configure your 🤗 Accelerate environment: Copied accelerate config During configuration, confirm that you want to use DeepSpeed stage 2. 
Now it should be possible to train on under 8GB vRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM (~25 GB). See the DeepSpeed documentation for more configuration options. Your configuration file should look something like: Copied compute_environment: LOCAL_MACHINE +deepspeed_config: + gradient_accumulation_steps: 4 + offload_optimizer_device: cpu + offload_param_device: cpu + zero3_init_flag: false + zero_stage: 2 +distributed_type: DEEPSPEED You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam deepspeed.ops.adam.DeepSpeedCPUAdam for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. bitsandbytes 8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. That’s it! You don’t need to add any additional parameters to your training command. + + + + + + Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub + + +With Flax, you can profile your code by adding the --profile_steps==5 parameter to your training command. Install the Tensorboard profile plugin: Copied pip install tensorflow tensorboard-plugin-profile +tensorboard --logdir runs/fill-circle-100steps-20230411_165612/ Then you can inspect the profile at http://localhost:6006/#profile. If you run into version conflicts with the plugin, try uninstalling and reinstalling all versions of TensorFlow and Tensorboard. The debugging functionality of the profile plugin is still experimental, and not all views are fully functional. The trace_viewer cuts off events after 1M, which can result in all your device traces getting lost if for example, you profile the compilation step by accident. Copied python3 train_controlnet_flax.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=1000 \ + --train_batch_size=2 \ + --revision="non-ema" \ + --from_pt \ + --report_to="wandb" \ + --tracker_project_name=$HUB_MODEL_ID \ + --num_train_epochs=11 \ + --push_to_hub \ + --hub_model_id=$HUB_MODEL_ID + + +Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/eefc019cb01ab5e1d44fdc84bddec8c4.txt b/scrapped_outputs/eefc019cb01ab5e1d44fdc84bddec8c4.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6ee2d139f8d33d1b57f5e5dc720363dd35642a1 --- /dev/null +++ b/scrapped_outputs/eefc019cb01ab5e1d44fdc84bddec8c4.txt @@ -0,0 +1,101 @@ +Shap-E The Shap-E model was proposed in Shap-E: Generating Conditional 3D Implicit Functions by Alex Nichol and Heewoo Jun from OpenAI. The abstract from the paper is: We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. The original codebase can be found at openai/shap-e. See the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. ShapEPipeline class diffusers.ShapEPipeline < source > ( prior: PriorTransformer text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of an MLP to create 3D objects with the NeRF +rendering method.
Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: str num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 15.0 +>>> prompt = "a shark" + +>>> images = pipe( +... prompt, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... ).images + +>>> gif_path = export_to_gif(images[0], "shark_3d.gif") ShapEImg2ImgPipeline class diffusers.ShapEImg2ImgPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModel image_processor: CLIPImageProcessor scheduler: HeunDiscreteScheduler shap_e_renderer: ShapERenderer ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModel) — +Frozen image-encoder. image_processor (CLIPImageProcessor) — +A CLIPImageProcessor to process images. scheduler (HeunDiscreteScheduler) — +A scheduler to be used in combination with the prior model to generate image embedding. 
shap_e_renderer (ShapERenderer) — +Shap-E renderer projects the generated latents into parameters of a MLP to create 3D objects with the NeRF +rendering method. Pipeline for generating latent representation of a 3D asset and rendering with the NeRF method from an image. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 frame_size: int = 64 output_type: Optional = 'pil' return_dict: bool = True ) → ShapEPipelineOutput or tuple Parameters image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can also accept image +latents as image, but if passing latents directly it is not encoded again. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. frame_size (int, optional, default to 64) — +The width and height of each image frame of the generated 3D output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between "pil" (PIL.Image.Image), "np" +(np.array), "latent" (torch.Tensor), or mesh (MeshDecoderOutput). return_dict (bool, optional, defaults to True) — +Whether or not to return a ShapEPipelineOutput instead of a plain +tuple. Returns +ShapEPipelineOutput or tuple + +If return_dict is True, ShapEPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from diffusers.utils import export_to_gif, load_image + +>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + +>>> repo = "openai/shap-e-img2img" +>>> pipe = DiffusionPipeline.from_pretrained(repo, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> guidance_scale = 3.0 +>>> image_url = "https://hf.co/datasets/diffusers/docs-images/resolve/main/shap-e/corgi.png" +>>> image = load_image(image_url).convert("RGB") + +>>> images = pipe( +... image, +... guidance_scale=guidance_scale, +... num_inference_steps=64, +... frame_size=256, +... 
).images + +>>> gif_path = export_to_gif(images[0], "corgi_3d.gif") ShapEPipelineOutput class diffusers.pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput < source > ( images: Union ) Parameters images (torch.FloatTensor) — +A list of images for 3D rendering. Output class for ShapEPipeline and ShapEImg2ImgPipeline. diff --git a/scrapped_outputs/ef241497e054b383759b02f19766f207.txt b/scrapped_outputs/ef241497e054b383759b02f19766f207.txt new file mode 100644 index 0000000000000000000000000000000000000000..bedbfd4f29d8fea8e1cb1523c05c8b8e204c564f --- /dev/null +++ b/scrapped_outputs/ef241497e054b383759b02f19766f207.txt @@ -0,0 +1,52 @@ +CMStochasticIterativeScheduler Consistency Models by Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever introduced a multistep and onestep scheduler (Algorithm 1) that is capable of generating good samples in one or a small number of steps. The abstract from the paper is: Diffusion models have significantly advanced the fields of image, audio, and video generation, but they depend on an iterative sampling process that causes slow generation. To overcome this limitation, we propose consistency models, a new family of models that generate high quality samples by directly mapping noise to data. They support fast one-step generation by design, while still allowing multistep sampling to trade compute for sample quality. They also support zero-shot data editing, such as image inpainting, colorization, and super-resolution, without requiring explicit training on these tasks. Consistency models can be trained either by distilling pre-trained diffusion models, or as standalone generative models altogether. Through extensive experiments, we demonstrate that they outperform existing distillation techniques for diffusion models in one- and few-step sampling, achieving the new state-of-the-art FID of 3.55 on CIFAR-10 and 6.20 on ImageNet 64x64 for one-step generation. When trained in isolation, consistency models become a new family of generative models that can outperform existing one-step, non-adversarial generative models on standard benchmarks such as CIFAR-10, ImageNet 64x64 and LSUN 256x256. The original codebase can be found at openai/consistency_models. CMStochasticIterativeScheduler class diffusers.CMStochasticIterativeScheduler < source > ( num_train_timesteps: int = 40 sigma_min: float = 0.002 sigma_max: float = 80.0 sigma_data: float = 0.5 s_noise: float = 1.0 rho: float = 7.0 clip_denoised: bool = True ) Parameters num_train_timesteps (int, defaults to 40) — +The number of diffusion steps to train the model. sigma_min (float, defaults to 0.002) — +Minimum noise magnitude in the sigma schedule. Defaults to 0.002 from the original implementation. sigma_max (float, defaults to 80.0) — +Maximum noise magnitude in the sigma schedule. Defaults to 80.0 from the original implementation. sigma_data (float, defaults to 0.5) — +The standard deviation of the data distribution from the EDM +paper. Defaults to 0.5 from the original implementation. s_noise (float, defaults to 1.0) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. Defaults to 1.0 from the original implementation. rho (float, defaults to 7.0) — +The parameter for calculating the Karras sigma schedule from the EDM +paper. Defaults to 7.0 from the original implementation. clip_denoised (bool, defaults to True) — +Whether to clip the denoised outputs to (-1, 1). 
timesteps (List or np.ndarray or torch.Tensor, optional) — +An explicit timestep schedule that can be optionally specified. The timesteps are expected to be in +increasing order. Multistep and onestep sampling for consistency models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. get_scalings_for_boundary_condition < source > ( sigma ) → tuple Parameters sigma (torch.FloatTensor) — +The current sigma in the Karras sigma schedule. Returns +tuple + +A two-element tuple where c_skip (which weights the current sample) is the first element and c_out +(which weights the consistency model output) is the second element. + Gets the scalings used in the consistency model parameterization (from Appendix C of the +paper) to enforce boundary condition. epsilon in the equations for c_skip and c_out is set to sigma_min. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (float or torch.FloatTensor) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Scales the consistency model input by (sigma**2 + sigma_data**2) ** 0.5. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the timesteps used for the diffusion chain (to be run before inference). sigma_to_t < source > ( sigmas: Union ) → float or np.ndarray Parameters sigmas (float or np.ndarray) — +A single Karras sigma or an array of Karras sigmas. Returns +float or np.ndarray + +A scaled input timestep or scaled input timestep array. + Gets scaled timesteps from the Karras sigmas for input to the consistency model. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → CMStochasticIterativeSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +CMStochasticIterativeSchedulerOutput or tuple. Returns +CMStochasticIterativeSchedulerOutput or tuple + +If return_dict is True, +CMStochasticIterativeSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
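To see how these methods fit together, here is a minimal sampling-loop sketch built only from the API documented above. The checkpoint path is a placeholder and the UNet is assumed to be an unconditional UNet2DModel; real consistency-model checkpoints are often class-conditional and would also need class labels.

import torch
from diffusers import UNet2DModel, CMStochasticIterativeScheduler

# hypothetical checkpoint path; replace with a real consistency-model UNet
unet = UNet2DModel.from_pretrained("path/to/consistency-model-unet").to("cuda")
scheduler = CMStochasticIterativeScheduler()

# multistep sampling with two model evaluations
scheduler.set_timesteps(num_inference_steps=2, device="cuda")

sample = torch.randn(
    1, unet.config.in_channels, unet.config.sample_size, unet.config.sample_size, device="cuda"
)
sample = sample * scheduler.init_noise_sigma  # start from noise at the maximum sigma

for t in scheduler.timesteps:
    scaled_sample = scheduler.scale_model_input(sample, t)  # (sigma**2 + sigma_data**2) ** 0.5 scaling
    model_output = unet(scaled_sample, t).sample             # consistency model prediction
    sample = scheduler.step(model_output, t, sample).prev_sample  # stochastic consistency update

With num_inference_steps=1, the same loop performs onestep generation.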
CMStochasticIterativeSchedulerOutput class diffusers.schedulers.scheduling_consistency_models.CMStochasticIterativeSchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Output class for the scheduler’s step function. diff --git a/scrapped_outputs/ef5f4f96ad21c8ab59a9c58683d61659.txt b/scrapped_outputs/ef5f4f96ad21c8ab59a9c58683d61659.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2d63d59b04449a98f5d12b99c53e29a1ead14b --- /dev/null +++ b/scrapped_outputs/ef5f4f96ad21c8ab59a9c58683d61659.txt @@ -0,0 +1,64 @@ +Textual Inversion Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. With the same configuration and setup as PyTorch, the Flax training script should be at least ~70% faster! This guide will explore the textual_inversion.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Navigate to the example folder with the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/textual_inversion +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you tailor the training run to your needs. All of the parameters and their descriptions are listed in the parse_args() function. 
Where applicable, Diffusers provides default values for each parameter such as the training batch size and learning rate, but feel free to change these values in the training command if you’d like. For example, to increase the number of gradient accumulation steps above the default value of 1: Copied accelerate launch textual_inversion.py \ + --gradient_accumulation_steps=4 Some other basic and important parameters to specify include: --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model --train_data_dir: path to a folder containing the training dataset (example images) --output_dir: where to save the trained model --push_to_hub: whether to push the trained model to the Hub --checkpointing_steps: frequency of saving a checkpoint as the model trains; this is useful if for some reason training is interrupted, you can continue training from that checkpoint by adding --resume_from_checkpoint to your training command --num_vectors: the number of vectors to learn the embeddings with; increasing this parameter helps the model learn better but it comes with increased training costs --placeholder_token: the special word to tie the learned embeddings to (you must use the word in your prompt for inference) --initializer_token: a single-word that roughly describes the object or style you’re trying to train on --learnable_property: whether you’re training the model to learn a new “style” (for example, Van Gogh’s painting style) or “object” (for example, your dog) Training script Unlike some of the other training scripts, textual_inversion.py has a custom dataset class, TextualInversionDataset for creating a dataset. You can customize the image size, placeholder token, interpolation method, whether to crop the image, and more. If you need to change how the dataset is created, you can modify TextualInversionDataset. Next, you’ll find the dataset preprocessing code and training loop in the main() function. The script starts by loading the tokenizer, scheduler and model: Copied # Load tokenizer +if args.tokenizer_name: + tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) +elif args.pretrained_model_name_or_path: + tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") + +# Load scheduler and models +noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") +text_encoder = CLIPTextModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision +) +vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) +unet = UNet2DConditionModel.from_pretrained( + args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision +) The special placeholder token is added next to the tokenizer, and the embedding is readjusted to account for the new token. 
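In simplified form, that token bookkeeping looks roughly like this (a sketch of the idea rather than a verbatim excerpt from textual_inversion.py; it reuses the tokenizer, text_encoder, and args objects loaded above):

# add the placeholder token and look up the relevant token ids
num_added_tokens = tokenizer.add_tokens([args.placeholder_token])
if num_added_tokens == 0:
    raise ValueError(f"The tokenizer already contains the token {args.placeholder_token}.")

placeholder_token_ids = tokenizer.convert_tokens_to_ids([args.placeholder_token])
initializer_token_id = tokenizer.convert_tokens_to_ids([args.initializer_token])[0]

# grow the embedding matrix and copy the initializer token's embedding into the new row(s)
text_encoder.resize_token_embeddings(len(tokenizer))
token_embeds = text_encoder.get_input_embeddings().weight.data
with torch.no_grad():
    for token_id in placeholder_token_ids:
        token_embeds[token_id] = token_embeds[initializer_token_id].clone()

Only these new embedding rows are trained; the VAE, the UNet, and the rest of the text encoder stay frozen.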
Then, the script creates a dataset from the TextualInversionDataset: Copied train_dataset = TextualInversionDataset( + data_root=args.train_data_dir, + tokenizer=tokenizer, + size=args.resolution, + placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))), + repeats=args.repeats, + learnable_property=args.learnable_property, + center_crop=args.center_crop, + set="train", +) +train_dataloader = torch.utils.data.DataLoader( + train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers +) Finally, the training loop handles everything else from predicting the noisy residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! 🚀 For this guide, you’ll download some images of a cat toy and store them in a directory. But remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Copied from huggingface_hub import snapshot_download + +local_dir = "./cat" +snapshot_download( + "diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" +) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example images token_identifier.txt: the special placeholder token type_of_concept.txt: the type of concept you’re training on (either “object” or “style”) A full training run takes ~1 hour on a single V100 GPU. One more thing before you launch the script. If you’re interested in following along with the training process, you can periodically save generated images as training progresses. Add the following parameters to the training command: Copied --validation_prompt="A train" +--num_validation_images=4 +--validation_steps=100 PyTorch Flax Copied export MODEL_NAME="runwayml/stable-diffusion-v1-5" +export DATA_DIR="./cat" + +accelerate launch textual_inversion.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_data_dir=$DATA_DIR \ + --learnable_property="object" \ + --placeholder_token="" \ + --initializer_token="toy" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --max_train_steps=3000 \ + --learning_rate=5.0e-04 \ + --scale_lr \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --output_dir="textual_inversion_cat" \ + --push_to_hub After training is complete, you can use your newly trained model for inference like: PyTorch Flax Copied from diffusers import StableDiffusionPipeline +import torch + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda") +pipeline.load_textual_inversion("sd-concepts-library/cat-toy") +image = pipeline("A train", num_inference_steps=50).images[0] +image.save("cat-train.png") Next steps Congratulations on training your own Textual Inversion model! 
🎉 To learn more about how to use your new model, the following guides may be helpful: Learn how to load Textual Inversion embeddings and also use them as negative embeddings. Learn how to use Textual Inversion for inference with Stable Diffusion 1/2 and Stable Diffusion XL. diff --git a/scrapped_outputs/ef6087231998dbd01060d69d734c0f86.txt b/scrapped_outputs/ef6087231998dbd01060d69d734c0f86.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fdf516b6d77156c92f409f664a1bb5bd1902c7b --- /dev/null +++ b/scrapped_outputs/ef6087231998dbd01060d69d734c0f86.txt @@ -0,0 +1,65 @@ +ControlNet ControlNet models are adapters trained on top of another pretrained model. It allows for a greater degree of control over image generation by conditioning the model with an additional input image. The input image can be a canny edge, depth map, human pose, and many more. If you’re training on a GPU with limited vRAM, you should try enabling the gradient_checkpointing, gradient_accumulation_steps, and mixed_precision parameters in the training command. You can also reduce your memory footprint by using memory-efficient attention with xFormers. JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn’t support gradient checkpointing or xFormers. You should have a GPU with >30GB of memory if you want to train faster with Flax. This guide will explore the train_controlnet.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: PyTorch Flax Copied cd examples/controlnet +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. 
For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_controlnet.py \ + --mixed_precision="fp16" Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for ControlNet: --max_train_samples: the number of training samples; this can be lowered for faster training, but if you want to stream really large datasets, you’ll need to include this parameter and the --streaming parameter in your training command --gradient_accumulation_steps: number of update steps to accumulate before the backward pass; this allows you to train with a bigger batch size than your GPU memory can typically handle Min-SNR weighting The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting epsilon (noise) or v_prediction, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the --snr_gamma parameter and set it to the recommended value of 5.0: Copied accelerate launch train_controlnet.py \ + --snr_gamma=5.0 Training script As with the script parameters, a general walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the relevant parts of the ControlNet script. The training script has a make_train_dataset function for preprocessing the dataset with image transforms and caption tokenization. You’ll see that in addition to the usual caption tokenization and image transforms, the script also includes transforms for the conditioning image. If you’re streaming a dataset on a TPU, performance may be bottlenecked by the 🤗 Datasets library which is not optimized for images. To ensure maximum throughput, you’re encouraged to explore other dataset formats like WebDataset, TorchData, and TensorFlow Datasets. Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, you’ll find the code for loading the tokenizer, text encoder, scheduler and models. 
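As a rough sketch of what that loading looks like (the actual script wraps some of these calls in small helpers, so treat the exact class names as illustrative and assume the script's imports and args are in scope):

# load the fixed components from the base model; only the ControlNet will be trained
tokenizer = AutoTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision, use_fast=False
)
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)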
This is also where the ControlNet model is loaded either from existing weights or randomly initialized from a UNet: Copied if args.controlnet_model_name_or_path: + logger.info("Loading existing controlnet weights") + controlnet = ControlNetModel.from_pretrained(args.controlnet_model_name_or_path) +else: + logger.info("Initializing controlnet weights from unet") + controlnet = ControlNetModel.from_unet(unet) The optimizer is set up to update the ControlNet parameters: Copied params_to_optimize = controlnet.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Finally, in the training loop, the conditioning text embeddings and image are passed to the down and mid-blocks of the ControlNet model: Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +controlnet_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) + +down_block_res_samples, mid_block_res_sample = controlnet( + noisy_latents, + timesteps, + encoder_hidden_states=encoder_hidden_states, + controlnet_cond=controlnet_image, + return_dict=False, +) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 This guide uses the fusing/fill50k dataset, but remember, you can create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png One more thing before you launch the script! Depending on the GPU you have, you may need to enable certain optimizations to train a ControlNet. The default configuration in this script requires ~38GB of vRAM. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 16GB 12GB 8GB On a 16GB GPU, you can use bitsandbytes 8-bit optimizer and gradient checkpointing to optimize your training run. Install bitsandbytes: Copied pip install bitsandbytes Then, add the following parameter to your training command: Copied accelerate launch train_controlnet.py \ + --gradient_checkpointing \ + --use_8bit_adam \ PyTorch Flax Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/save/model" + +accelerate launch train_controlnet.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --resolution=512 \ + --learning_rate=1e-5 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --push_to_hub Once training is complete, you can use your newly trained model for inference! 
Copied from diffusers import StableDiffusionControlNetPipeline, ControlNetModel +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained("path/to/controlnet", torch_dtype=torch.float16) +pipeline = StableDiffusionControlNetPipeline.from_pretrained( + "path/to/base/model", controlnet=controlnet, torch_dtype=torch.float16 +).to("cuda") + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline(prompt, num_inference_steps=20, generator=generator, image=control_image).images[0] +image.save("./output.png") Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_controlnet_sdxl.py script to train a ControlNet adapter for the SDXL model. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own ControlNet! To learn more about how to use your new model, the following guides may be helpful: Learn how to use a ControlNet for inference on a variety of tasks. diff --git a/scrapped_outputs/efc07c95e34a82b06cbe14ad66adb65d.txt b/scrapped_outputs/efc07c95e34a82b06cbe14ad66adb65d.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/efc07c95e34a82b06cbe14ad66adb65d.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert ( + "TPU" in device_type +), "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want).
Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. 
Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. 
Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/f024db473e4c9cbcdfe733fa79eb33f3.txt b/scrapped_outputs/f024db473e4c9cbcdfe733fa79eb33f3.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/f024db473e4c9cbcdfe733fa79eb33f3.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. Start by installing DeepCache: Copied pip install DeepCache Then load and enable the DeepCacheSDHelper: Copied import torch + from diffusers import StableDiffusionPipeline + pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") + ++ from DeepCache import DeepCacheSDHelper ++ helper = DeepCacheSDHelper(pipe=pipe) ++ helper.set_params( ++ cache_interval=3, ++ cache_branch_id=0, ++ ) ++ helper.enable() + + image = pipe("a photo of an astronaut on a moon").images[0] The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval means the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. +Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper). Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset. Benchmark We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Resolution Batch size Original DeepCache(I=3, B=0) DeepCache(I=5, B=0) DeepCache(I=5, B=1) 512 8 15.96 6.88(2.32x) 5.03(3.18x) 7.27(2.20x) 4 8.39 3.60(2.33x) 2.62(3.21x) 3.75(2.24x) 1 2.61 1.12(2.33x) 0.81(3.24x) 1.11(2.35x) 768 8 43.58 18.99(2.29x) 13.96(3.12x) 21.27(2.05x) 4 22.24 9.67(2.30x) 7.10(3.13x) 10.74(2.07x) 1 6.33 2.72(2.33x) 1.97(3.21x) 2.98(2.12x) 1024 8 101.95 45.57(2.24x) 33.72(3.02x) 53.00(1.92x) 4 49.25 21.86(2.25x) 16.19(3.04x) 25.78(1.91x) 1 13.83 6.07(2.28x) 4.43(3.12x) 7.15(1.93x) diff --git a/scrapped_outputs/f0371219067fa884217ff0819a2e1d90.txt b/scrapped_outputs/f0371219067fa884217ff0819a2e1d90.txt new file mode 100644 index 0000000000000000000000000000000000000000..e0e7bf9dc6b644c6e5f567e12d4d03cbf4f0b036 --- /dev/null +++ b/scrapped_outputs/f0371219067fa884217ff0819a2e1d90.txt @@ -0,0 +1,50 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. 
The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). interpolation_type(str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be on of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. 
set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/f07141d51380faed60b66cce1262e2f7.txt b/scrapped_outputs/f07141d51380faed60b66cce1262e2f7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f076cfe20286cc5121239985a204c750.txt b/scrapped_outputs/f076cfe20286cc5121239985a204c750.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/f076cfe20286cc5121239985a204c750.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. 
With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
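As a quick sanity check of that relationship before the full example below (plain arithmetic, not library code):

num_inference_steps = 50
strength = 0.8
denoising_steps = int(num_inference_steps * strength)
print(denoising_steps)  # 40: the pipeline skips the 10 highest-noise steps and denoises for 40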
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides. 
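As a rough end-to-end sketch of the tips above (assuming PyTorch 2.0 or newer so xFormers is not needed; note that offloading combined with torch.compile can trigger extra recompilations on some setups), you might wire the optimizations into the image-to-image pipeline like this:

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
# offload model components to the CPU while they are idle
pipeline.enable_model_cpu_offload()
# compile the UNet (the first call is slow while compilation happens)
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt, image=init_image, strength=0.6).images[0]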
diff --git a/scrapped_outputs/f07c3a7f8c7cedfe1cb99ae02a71eea3.txt b/scrapped_outputs/f07c3a7f8c7cedfe1cb99ae02a71eea3.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f0836d0af1846aa52ff7255d4df5d6b4.txt b/scrapped_outputs/f0836d0af1846aa52ff7255d4df5d6b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f08ff386866b35d9e484ebc3427056a0.txt b/scrapped_outputs/f08ff386866b35d9e484ebc3427056a0.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec0ca022fc192e20ccf6ff3307b2799096156b70 --- /dev/null +++ b/scrapped_outputs/f08ff386866b35d9e484ebc3427056a0.txt @@ -0,0 +1,44 @@ +Using Diffusers for reinforcement learning + +Support for one RL model and related pipelines is included in the experimental source of diffusers. +More models and examples coming soon! + +Diffuser Value-guided Planning + +You can run the model from Planning with Diffusion for Flexible Behavior Synthesis with Diffusers. +The script is located in the RL Examples folder. +Or, run this example in Colab + +class diffusers.experimental.ValueGuidedRLPipeline + +< +source +> +( +value_function: UNet1DModel +unet: UNet1DModel +scheduler: DDPMScheduler +env + +) + + +Parameters + +value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories base on reward. + + +unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded trajectories. Default for this +application is DDPMScheduler. +env — An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +Pipeline for sampling actions from a diffusion model trained to predict sequences of states. +Original implementation inspired by this repository: https://github.com/jannerm/diffuser. diff --git a/scrapped_outputs/f0b01df8ee16eaecf1d510a796256e85.txt b/scrapped_outputs/f0b01df8ee16eaecf1d510a796256e85.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/f0b01df8ee16eaecf1d510a796256e85.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. 
We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> image = pipe(eta=0.0, num_inference_steps=50) + +>>> # process image to PIL +>>> image_processed = image.cpu().permute(0, 2, 3, 1) +>>> image_processed = (image_processed + 1.0) * 127.5 +>>> image_processed = image_processed.numpy().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
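Since output_type defaults to "pil" and the pipeline returns the ImagePipelineOutput described here, a shorter way to get ready-to-save images is sketched below (same checkpoint as the example above; no manual tensor post-processing is needed):

from diffusers import DDIMPipeline

pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom")

# default output_type="pil" returns an ImagePipelineOutput whose .images is a list of PIL images
output = pipe(eta=0.0, num_inference_steps=50)
output.images[0].save("test.png")

# return_dict=False returns a plain tuple instead; the first element is the list of images
(images,) = pipe(eta=0.0, num_inference_steps=50, return_dict=False)
images[0].save("test_tuple.png")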
diff --git a/scrapped_outputs/f0b4ac942f128757405f29e2490a643a.txt b/scrapped_outputs/f0b4ac942f128757405f29e2490a643a.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ca717f0700a4b307853b574bca0b3a716666e97 --- /dev/null +++ b/scrapped_outputs/f0b4ac942f128757405f29e2490a643a.txt @@ -0,0 +1,69 @@ +How to use Stable Diffusion on Habana Gaudi + +🤗 Diffusers is compatible with Habana Gaudi through 🤗 Optimum Habana. + +Requirements + +Optimum Habana 1.5 or later, here is how to install it. +SynapseAI 1.9. + +Inference Pipeline + +To generate images with Stable Diffusion 1 and 2 on Gaudi, you need to instantiate two instances: +A pipeline with GaudiStableDiffusionPipeline. This pipeline supports text-to-image generation. +A scheduler with GaudiDDIMScheduler. This scheduler has been optimized for Habana Gaudi. +When initializing the pipeline, you have to specify use_habana=True to deploy it on HPUs. +Furthermore, in order to get the fastest possible generations you should enable HPU graphs with use_hpu_graphs=True. +Finally, you will need to specify a Gaudi configuration which can be downloaded from the Hugging Face Hub. + + + Copied +from optimum.habana import GaudiConfig +from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline + +model_name = "stabilityai/stable-diffusion-2-base" +scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler") +pipeline = GaudiStableDiffusionPipeline.from_pretrained( + model_name, + scheduler=scheduler, + use_habana=True, + use_hpu_graphs=True, + gaudi_config="Habana/stable-diffusion", +) +You can then call the pipeline to generate images by batches from one or several prompts: + + + Copied +outputs = pipeline( + prompt=[ + "High quality photo of an astronaut riding a horse in space", + "Face of a yellow cat, high resolution, sitting on a park bench", + ], + num_images_per_prompt=10, + batch_size=4, +) +For more information, check out Optimum Habana’s documentation and the example provided in the official Github repository. + +Benchmark + +Here are the latencies for Habana first-generation Gaudi and Gaudi2 with the Habana/stable-diffusion Gaudi configuration (mixed precision bf16/fp32): +Stable Diffusion v1.5 (512x512 resolution): + +Latency (batch size = 1) +Throughput (batch size = 8) +first-generation Gaudi +4.22s +0.29 images/s +Gaudi2 +1.70s +0.925 images/s +Stable Diffusion v2.1 (768x768 resolution): + +Latency (batch size = 1) +Throughput +first-generation Gaudi +23.3s +0.045 images/s (batch size = 2) +Gaudi2 +7.75s +0.14 images/s (batch size = 5) diff --git a/scrapped_outputs/f0d30c5a580867cce7eaf8b336743c53.txt b/scrapped_outputs/f0d30c5a580867cce7eaf8b336743c53.txt new file mode 100644 index 0000000000000000000000000000000000000000..ac84e7af684acbbe414a495264a2879f29f202cf --- /dev/null +++ b/scrapped_outputs/f0d30c5a580867cce7eaf8b336743c53.txt @@ -0,0 +1,114 @@ +Accelerate inference of text-to-image diffusion models Diffusion models are slower than their GAN counterparts because of the iterative and sequential reverse diffusion process. There are several techniques that can address this limitation such as progressive timestep distillation (LCM LoRA), model compression (SSD-1B), and reusing adjacent features of the denoiser (DeepCache). However, you don’t necessarily need to use these techniques to speed up inference. With PyTorch 2 alone, you can accelerate the inference latency of text-to-image diffusion pipelines by up to 3x. 
This tutorial will show you how to progressively apply the optimizations found in PyTorch 2 to reduce inference latency. You’ll use the Stable Diffusion XL (SDXL) pipeline in this tutorial, but these techniques are applicable to other text-to-image diffusion pipelines too. Make sure you’re using the latest version of Diffusers: Copied pip install -U diffusers Then upgrade the other required libraries too: Copied pip install -U transformers accelerate peft Install PyTorch nightly to benefit from the latest and fastest kernels: Copied pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 The results reported below are from a 80GB 400W A100 with its clock rate set to the maximum. If you’re interested in the full benchmarking code, take a look at huggingface/diffusion-fast. Baseline Let’s start with a baseline. Disable reduced precision and the scaled_dot_product_attention (SDPA) function which is automatically used by Diffusers: Copied from diffusers import StableDiffusionXLPipeline + +# Load the pipeline in full-precision and place its model components on CUDA. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0" +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] This default setup takes 7.36 seconds. bfloat16 Enable the first optimization, reduced precision or more specifically bfloat16. There are several benefits of using reduced precision: Using a reduced numerical precision (such as float16 or bfloat16) for inference doesn’t affect the generation quality but significantly improves latency. The benefits of using bfloat16 compared to float16 are hardware dependent, but modern GPUs tend to favor bfloat16. bfloat16 is much more resilient when used with quantization compared to float16, but more recent versions of the quantization library (torchao) we used don’t have numerical issues with float16. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Run the attention ops without SDPA. +pipe.unet.set_default_attn_processor() +pipe.vae.set_default_attn_processor() + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] bfloat16 reduces the latency from 7.36 seconds to 4.63 seconds. In our later experiments with float16, recent versions of torchao do not incur numerical problems from float16. Take a look at the Speed up inference guide to learn more about running inference with reduced precision. SDPA Attention blocks are intensive to run. But with PyTorch’s scaled_dot_product_attention function, it is a lot more efficient. This function is used by default in Diffusers so you don’t need to make any changes to the code. Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Scaled dot product attention improves the latency from 4.63 seconds to 3.31 seconds. 
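The latencies quoted in this guide come from a fixed benchmarking setup; if you want to measure your own numbers, a simple timing harness like this hedged sketch is usually enough, provided you synchronize the GPU and discard a warmup call:

import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# warmup call (especially important once torch.compile is involved)
_ = pipe(prompt, num_inference_steps=30).images[0]

torch.cuda.synchronize()
start = time.perf_counter()
_ = pipe(prompt, num_inference_steps=30).images[0]
torch.cuda.synchronize()
print(f"latency: {time.perf_counter() - start:.2f} s")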
torch.compile PyTorch 2 includes torch.compile which uses fast and optimized kernels. In Diffusers, the UNet and VAE are usually compiled because these are the most compute-intensive modules. First, configure a few compiler flags (refer to the full list for more options): Copied from diffusers import StableDiffusionXLPipeline +import torch + +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True It is also important to change the UNet and VAE’s memory layout to “channels_last” when compiling them to ensure maximum speed. Copied pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Now compile and perform inference: Copied # Compile the UNet and VAE. +pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# First call to `pipe` is slow, subsequent ones are faster. +image = pipe(prompt, num_inference_steps=30).images[0] torch.compile offers different backends and modes. For maximum inference speed, use “max-autotune” for the inductor backend. “max-autotune” uses CUDA graphs and optimizes the compilation graph specifically for latency. CUDA graphs greatly reduces the overhead of launching GPU operations by using a mechanism to launch multiple GPU operations through a single CPU operation. Using SDPA attention and compiling both the UNet and VAE cuts the latency from 3.31 seconds to 2.54 seconds. Prevent graph breaks Specifying fullgraph=True ensures there are no graph breaks in the underlying model to take full advantage of torch.compile without any performance degradation. For the UNet and VAE, this means changing how you access the return variables. Copied - latents = unet( +- latents, timestep=timestep, encoder_hidden_states=prompt_embeds +-).sample + ++ latents = unet( ++ latents, timestep=timestep, encoder_hidden_states=prompt_embeds, return_dict=False ++)[0] Remove GPU sync after compilation During the iterative reverse diffusion process, the step() function is called on the scheduler each time after the denoiser predicts the less noisy latent embeddings. Inside step(), the sigmas variable is indexed which when placed on the GPU, causes a communication sync between the CPU and GPU. This introduces latency and it becomes more evident when the denoiser has already been compiled. But if the sigmas array always stays on the CPU, the CPU and GPU sync doesn’t occur and you don’t get any latency. In general, any CPU and GPU communication sync should be none or be kept to a bare minimum because it can impact inference latency. Combine the attention block’s projection matrices The UNet and VAE in SDXL use Transformer-like blocks which consists of attention blocks and feed-forward blocks. In an attention block, the input is projected into three sub-spaces using three different projection matrices – Q, K, and V. These projections are performed separately on the input. But we can horizontally combine the projection matrices into a single matrix and perform the projection in one step. This increases the size of the matrix multiplications of the input projections and improves the impact of quantization. 
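To make the idea concrete, here is a small standalone toy example (not the diffusers implementation itself) showing that three separate Q/K/V projections are equivalent to a single fused projection whose weight is the concatenation of the three:

import torch
import torch.nn as nn

d = 64
x = torch.randn(2, 10, d)

# three separate projections, as in an unfused attention block
q_proj = nn.Linear(d, d, bias=False)
k_proj = nn.Linear(d, d, bias=False)
v_proj = nn.Linear(d, d, bias=False)
q, k, v = q_proj(x), k_proj(x), v_proj(x)

# one fused projection: a single, larger matmul
fused_proj = nn.Linear(d, 3 * d, bias=False)
with torch.no_grad():
    fused_proj.weight.copy_(torch.cat([q_proj.weight, k_proj.weight, v_proj.weight], dim=0))
q2, k2, v2 = fused_proj(x).chunk(3, dim=-1)

print(torch.allclose(q, q2, atol=1e-5), torch.allclose(k, k2, atol=1e-5), torch.allclose(v, v2, atol=1e-5))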
You can combine the projection matrices with just a single line of code: Copied pipe.fuse_qkv_projections() This provides a minor improvement from 2.54 seconds to 2.52 seconds. Support for fuse_qkv_projections() is limited and experimental. It’s not available for many non-Stable Diffusion pipelines such as Kandinsky. You can refer to this PR to get an idea about how to enable this for the other pipelines. Dynamic quantization You can also use the ultra-lightweight PyTorch quantization library, torchao (commit SHA 54bcd5a10d0abbe7b0c045052029257099f83fd9), to apply dynamic int8 quantization to the UNet and VAE. Quantization adds additional conversion overhead to the model that is hopefully made up for by faster matmuls (dynamic quantization). If the matmuls are too small, these techniques may degrade performance. First, configure all the compiler tags: Copied from diffusers import StableDiffusionXLPipeline +import torch + +# Notice the two new flags at the end. +torch._inductor.config.conv_1x1_as_mm = True +torch._inductor.config.coordinate_descent_tuning = True +torch._inductor.config.epilogue_fusion = False +torch._inductor.config.coordinate_descent_check_all_directions = True +torch._inductor.config.force_fuse_int_mm_with_mul = True +torch._inductor.config.use_mixed_mm = True Certain linear layers in the UNet and VAE don’t benefit from dynamic int8 quantization. You can filter out those layers with the dynamic_quant_filter_fn shown below. Copied def dynamic_quant_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Linear) + and mod.in_features > 16 + and (mod.in_features, mod.out_features) + not in [ + (1280, 640), + (1920, 1280), + (1920, 640), + (2048, 1280), + (2048, 2560), + (2560, 1280), + (256, 128), + (2816, 1280), + (320, 640), + (512, 1536), + (512, 256), + (512, 512), + (640, 1280), + (640, 1920), + (640, 320), + (640, 5120), + (640, 640), + (960, 320), + (960, 640), + ] + ) + + +def conv_filter_fn(mod, *args): + return ( + isinstance(mod, torch.nn.Conv2d) and mod.kernel_size == (1, 1) and 128 in [mod.in_channels, mod.out_channels] + ) Finally, apply all the optimizations discussed so far: Copied # SDPA + bfloat16. +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16 +).to("cuda") + +# Combine attention projection matrices. +pipe.fuse_qkv_projections() + +# Change the memory layout. +pipe.unet.to(memory_format=torch.channels_last) +pipe.vae.to(memory_format=torch.channels_last) Since dynamic quantization is only limited to the linear layers, convert the appropriate pointwise convolution layers into linear layers to maximize its benefit. Copied from torchao import swap_conv2d_1x1_to_linear + +swap_conv2d_1x1_to_linear(pipe.unet, conv_filter_fn) +swap_conv2d_1x1_to_linear(pipe.vae, conv_filter_fn) Apply dynamic quantization: Copied from torchao import apply_dynamic_quant + +apply_dynamic_quant(pipe.unet, dynamic_quant_filter_fn) +apply_dynamic_quant(pipe.vae, dynamic_quant_filter_fn) Finally, compile and perform inference: Copied pipe.unet = torch.compile(pipe.unet, mode="max-autotune", fullgraph=True) +pipe.vae.decode = torch.compile(pipe.vae.decode, mode="max-autotune", fullgraph=True) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe(prompt, num_inference_steps=30).images[0] Applying dynamic quantization improves the latency from 2.52 seconds to 2.43 seconds. 
diff --git a/scrapped_outputs/f0d5be80b4c1c56519f12c21f7aadd0b.txt b/scrapped_outputs/f0d5be80b4c1c56519f12c21f7aadd0b.txt new file mode 100644 index 0000000000000000000000000000000000000000..62825fe72aa801b97e465830300492417c227d28 --- /dev/null +++ b/scrapped_outputs/f0d5be80b4c1c56519f12c21f7aadd0b.txt @@ -0,0 +1,18 @@ +Stable Diffusion pipelines Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. Latent diffusion applies the diffusion process over a lower dimensional latent space to reduce memory and compute complexity. This specific type of diffusion model was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Stable Diffusion is trained on 512x512 images from a subset of the LAION-5B dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs. For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, take a look at the Stability AI announcement and our own blog post for more technical details. You can find the original codebase for Stable Diffusion v1.0 at CompVis/stable-diffusion and Stable Diffusion v2.0 at Stability-AI/stablediffusion as well as their original scripts for various tasks. Additional official checkpoints for the different Stable Diffusion versions and tasks can be found on the CompVis, Runway, and Stability AI Hub organizations. Explore these organizations to find the best checkpoint for your use-case! The table below summarizes the available Stable Diffusion pipelines, their supported tasks, and an interactive demo: Pipeline Supported tasks 🤗 Space StableDiffusion text-to-image StableDiffusionImg2Img image-to-image StableDiffusionInpaint inpainting StableDiffusionDepth2Img depth-to-image StableDiffusionImageVariation image variation StableDiffusionPipelineSafe filtered text-to-image StableDiffusion2 text-to-image, inpainting, depth-to-image, super-resolution StableDiffusionXL text-to-image, image-to-image StableDiffusionLatentUpscale super-resolution StableDiffusionUpscale super-resolution StableDiffusionLDM3D text-to-rgb, text-to-depth, text-to-pano StableDiffusionUpscaleLDM3D ldm3d super-resolution Tips To help you get the most out of the Stable Diffusion pipelines, here are a few tips for improving performance and usability. These tips are applicable to all Stable Diffusion pipelines. Explore tradeoff between speed and quality StableDiffusionPipeline uses the PNDMScheduler by default, but 🤗 Diffusers provides many other schedulers (some of which are faster or output better quality) that are compatible. 
For example, if you want to use the EulerDiscreteScheduler instead of the default: Copied from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler + +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +# or +euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") +pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) Reuse pipeline components to save memory To save memory and use the same components across multiple pipelines, use the .components method to avoid loading weights into RAM more than once. Copied from diffusers import ( + StableDiffusionPipeline, + StableDiffusionImg2ImgPipeline, + StableDiffusionInpaintPipeline, +) + +text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") +img2img = StableDiffusionImg2ImgPipeline(**text2img.components) +inpaint = StableDiffusionInpaintPipeline(**text2img.components) + +# now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline diff --git a/scrapped_outputs/f0d70b58a12bc5d0581b3440977afbfa.txt b/scrapped_outputs/f0d70b58a12bc5d0581b3440977afbfa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f0dedf05b8b9255b7c071e61913fe7aa.txt b/scrapped_outputs/f0dedf05b8b9255b7c071e61913fe7aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..bbc3acf76c7c15bd0150cb7a94aa944d1e65fda4 --- /dev/null +++ b/scrapped_outputs/f0dedf05b8b9255b7c071e61913fe7aa.txt @@ -0,0 +1,93 @@ +InstructPix2Pix InstructPix2Pix is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be “turn the clouds rainy” and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. This guide will explore the train_instruct_pix2pix.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/instruct_pix2pix +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. 
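An InstructPix2Pix dataset needs an original image, an edited image, and an edit instruction per example. As a quick sanity check, you can peek at the example dataset used later in this guide with 🤗 Datasets (a hedged sketch; the column names depend on the dataset you load):

from datasets import load_dataset

dataset = load_dataset("fusing/instructpix2pix-1000-samples", split="train")
print(dataset)               # number of rows and column names
print(dataset.column_names)  # should contain an original image, edited image, and edit prompt column
sample = dataset[0]
print({k: type(v) for k, v in sample.items()})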
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. Default values are provided for most parameters that work pretty well, but you can also set your own values in the training command if you’d like. For example, to increase the resolution of the input image: Copied accelerate launch train_instruct_pix2pix.py \ + --resolution=512 \ Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix: --original_image_column: the original image before the edits are made --edited_image_column: the image after the edits are made --edit_prompt_column: the instructions to edit the image --conditioning_dropout_prob: the dropout probability for the edited image and edit prompts during training, which enables classifier-free guidance (CFG) for one or both conditioning inputs Training script The dataset preprocessing code and training loop are found in the main() function. This is where you’ll make your changes to the training script to adapt it for your own use-case. As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the parts of the script that are relevant to InstructPix2Pix. The script begins by modifying the number of input channels in the first convolutional layer of the UNet to account for InstructPix2Pix’s additional conditioning image: Copied in_channels = 8 +out_channels = unet.conv_in.out_channels +unet.register_to_config(in_channels=in_channels) + +with torch.no_grad(): + new_conv_in = nn.Conv2d( + in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding + ) + new_conv_in.weight.zero_() + new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight) + unet.conv_in = new_conv_in These UNet parameters are updated by the optimizer: Copied optimizer = optimizer_cls( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Next, the edited images and edit instructions are preprocessed and tokenized. It is important that the same image transformations are applied to the original and edited images.
Copied def preprocess_train(examples): + preprocessed_images = preprocess_images(examples) + + original_images, edited_images = preprocessed_images.chunk(2) + original_images = original_images.reshape(-1, 3, args.resolution, args.resolution) + edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution) + + examples["original_pixel_values"] = original_images + examples["edited_pixel_values"] = edited_images + + captions = list(examples[edit_prompt_column]) + examples["input_ids"] = tokenize_captions(captions) + return examples Finally, in the training loop, it starts by encoding the edited images into latent space: Copied latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample() +latents = latents * vae.config.scaling_factor Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image. Copied encoder_hidden_states = text_encoder(batch["input_ids"])[0] +original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() + +if args.conditioning_dropout_prob is not None: + random_p = torch.rand(bsz, device=latents.device, generator=generator) + prompt_mask = random_p < 2 * args.conditioning_dropout_prob + prompt_mask = prompt_mask.reshape(bsz, 1, 1) + null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0] + encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) + + image_mask_dtype = original_image_embeds.dtype + image_mask = 1 - ( + (random_p >= args.conditioning_dropout_prob).to(image_mask_dtype) + * (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype) + ) + image_mask = image_mask.reshape(bsz, 1, 1, 1) + original_image_embeds = image_mask * original_image_embeds That’s pretty much it! Aside from the differences described here, the rest of the script is very similar to the Text-to-image training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Once you’re happy with the changes to your script or if you’re okay with the default configuration, you’re ready to launch the training script! 🚀 This guide uses the fusing/instructpix2pix-1000-samples dataset, which is a smaller version of the original dataset. You can also create and use your own dataset if you’d like (see the Create a dataset for training guide). Set the MODEL_NAME environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the DATASET_ID to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository. For better results, try longer training runs with a larger dataset. We’ve only tested this training script on a smaller-scale dataset. To monitor training progress with Weights and Biases, add the --report_to=wandb parameter to the training command and specify a validation image with --val_image_url and a validation prompt with --validation_prompt. This can be really useful for debugging the model. If you’re training on more than one GPU, add the --multi_gpu parameter to the accelerate launch command. 
Copied accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --dataset_name=$DATASET_ID \ + --enable_xformers_memory_efficient_attention \ + --resolution=256 \ + --random_flip \ + --train_batch_size=4 \ + --gradient_accumulation_steps=4 \ + --gradient_checkpointing \ + --max_train_steps=15000 \ + --checkpointing_steps=5000 \ + --checkpoints_total_limit=1 \ + --learning_rate=5e-05 \ + --max_grad_norm=1 \ + --lr_warmup_steps=0 \ + --conditioning_dropout_prob=0.05 \ + --mixed_precision=fp16 \ + --seed=42 \ + --push_to_hub After training is finished, you can use your new InstructPix2Pix for inference: Copied import PIL +import requests +import torch +from diffusers import StableDiffusionInstructPix2PixPipeline +from diffusers.utils import load_image + +pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda") +generator = torch.Generator("cuda").manual_seed(0) + +image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png") +prompt = "add some ducks to the lake" +num_inference_steps = 20 +image_guidance_scale = 1.5 +guidance_scale = 10 + +edited_image = pipeline( + prompt, + image=image, + num_inference_steps=num_inference_steps, + image_guidance_scale=image_guidance_scale, + guidance_scale=guidance_scale, + generator=generator, +).images[0] +edited_image.save("edited_image.png") You should experiment with different num_inference_steps, image_guidance_scale, and guidance_scale values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_instruct_pix2pix_sdxl.py script to train a SDXL model to follow image editing instructions. The SDXL training script is discussed in more detail in the SDXL training guide. Next steps Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to: Read the Instruction-tuning Stable Diffusion with InstructPix2Pix blog post to learn more about some experiments we’ve done with InstructPix2Pix, dataset preparation, and results for different instructions. diff --git a/scrapped_outputs/f0e0d63e6ea32d1444fc24fad4b74bd1.txt b/scrapped_outputs/f0e0d63e6ea32d1444fc24fad4b74bd1.txt new file mode 100644 index 0000000000000000000000000000000000000000..edc4f7b1ca0249c72aa65698e4a858d07f008b84 --- /dev/null +++ b/scrapped_outputs/f0e0d63e6ea32d1444fc24fad4b74bd1.txt @@ -0,0 +1,146 @@ +Reproducibility + +Before reading about reproducibility for Diffusers, it is strongly recommended to take a look at +PyTorch’s statement about reproducibility. +PyTorch states that +completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. +While one can never expect the same results across platforms, one can expect results to be reproducible +across releases, platforms, etc… within a certain tolerance. However, this tolerance strongly varies +depending on the diffusion pipeline and checkpoint. +In the following, we show how to best control sources of randomness for diffusion models. 
+ +Inference + +During inference, diffusion pipelines heavily rely on random sampling operations, such as the creating the +gaussian noise tensors to be denoised and adding noise to the scheduling step. +Let’s have a look at an example. We run the DDIM pipeline +for just two inference steps and return a numpy tensor to look into the numerical values of the output. + + + Copied +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) +Running the above prints a value of 1464.2076, but running it again prints a different +value of 1495.1768. What is going on here? Every time the pipeline is run, gaussian noise +is created and step-wise denoised. To create the gaussian noise with torch.randn, a different random seed is taken every time, thus leading to a different result. +This is a desired property of diffusion pipelines, as it means that the pipeline can create a different random image every time it is run. In many cases, one would like to generate the exact same image of a certain +run, for which case an instance of a PyTorch generator has to be passed: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above always prints a value of 1491.1711 - also upon running it again because we +define the generator object to be passed to all random functions of the pipeline. +If you run this code snippet on your specific hardware and version, you should get a similar, if not the same, result. +It might be a bit unintuitive at first to pass generator objects to the pipelines instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch as generators are random states that are advanced and can thus be +passed to multiple pipelines in a sequence. +Great! Now, we know how to write reproducible pipelines, but it gets a bit trickier since the above example only runs on the CPU. How do we also achieve reproducibility on GPU? +In short, one should not expect full reproducibility across different hardware when running pipelines on GPU +as matrix multiplications are less deterministic on GPU than on CPU and diffusion pipelines tend to require +a lot of matrix multiplications. Let’s see what we can do to keep the randomness within limits across +different GPU hardware. 
+To achieve maximum speed, it is recommended to create the generator directly on the GPU when running +the pipeline on the GPU: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above now prints a value of 1389.8634 - even though we’re using the exact same seed! +This is unfortunate, as it means the results achieved on the GPU cannot be reproduced on the CPU. +Nevertheless, it should be expected since the GPU uses a different random number generator than the CPU. +To circumvent this problem, we created a randn_tensor function, which can create random noise +on the CPU and then move the tensor to the GPU if necessary. The function is used everywhere inside the pipelines, allowing the user to always pass a CPU generator even if the pipeline is run on the GPU: + + + Copied +import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) +Running the above now prints a value of 1491.1713, much closer to the value of 1491.1711 obtained when +the pipeline is run entirely on the CPU. +As a consequence, we recommend always passing a CPU generator if reproducibility is important. +The loss of performance is often negligible, and the generated values are much closer to those you would +get by running the pipeline entirely on the CPU than they would be with a GPU generator. +Finally, we noticed that more complex pipelines, such as UnCLIPPipeline, are often extremely +susceptible to precision error propagation, and thus one cannot expect even similar results across +different GPU hardware or PyTorch versions. In such cases, one has to run on +exactly the same hardware and PyTorch version for full reproducibility. + +Randomness utilities + + +randn_tensor + + +diffusers.utils.randn_tensor + +< +source +> +( +shape: typing.Union[typing.Tuple, typing.List] +generator: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None +device: typing.Optional[ForwardRef('torch.device')] = None +dtype: typing.Optional[ForwardRef('torch.dtype')] = None +layout: typing.Optional[ForwardRef('torch.layout')] = None + +) + + + +This is a helper function that creates random tensors on the desired device with the desired dtype. When +passing a list of generators, you can seed each batch element individually. If CPU generators are passed, the tensor +will always be created on the CPU.
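As a short usage sketch of randn_tensor (the exact import path can differ between diffusers releases), you can draw reproducible latents from a CPU generator and still have the tensor placed on the GPU:

import torch
from diffusers.utils import randn_tensor  # newer releases expose it under diffusers.utils.torch_utils

# a CPU generator keeps the random draw reproducible across devices
generator = torch.manual_seed(0)

latents = randn_tensor(
    (1, 4, 64, 64),
    generator=generator,
    device=torch.device("cuda"),
    dtype=torch.float16,
)
print(latents.shape, latents.device, latents.dtype)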
diff --git a/scrapped_outputs/f0eb84d8112962e9e945fb99b1697c6c.txt b/scrapped_outputs/f0eb84d8112962e9e945fb99b1697c6c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f112433e9f0514db5b8fa84fc37cfbff.txt b/scrapped_outputs/f112433e9f0514db5b8fa84fc37cfbff.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/f112433e9f0514db5b8fa84fc37cfbff.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/f11a62462837d20aee033eccbab83111.txt b/scrapped_outputs/f11a62462837d20aee033eccbab83111.txt new file mode 100644 index 0000000000000000000000000000000000000000..86d9ddbbae81241685d47196515ab51585d529f3 --- /dev/null +++ b/scrapped_outputs/f11a62462837d20aee033eccbab83111.txt @@ -0,0 +1,93 @@ +Latent Consistency Distillation Latent Consistency Models (LCMs) are able to generate high-quality images in just a few steps, representing a big leap forward because many pipelines require at least 25+ steps. LCMs are produced by applying the latent consistency distillation method to any Stable Diffusion model. 
This method works by applying one-stage guided distillation to the latent space, and incorporating a skipping-step method to consistently skip timesteps to accelerate the distillation process (refer to section 4.1, 4.2, and 4.3 of the paper for more details). If you’re training on a GPU with limited vRAM, try enabling gradient_checkpointing, gradient_accumulation_steps, and mixed_precision to reduce memory-usage and speedup training. You can reduce your memory-usage even more by enabling memory-efficient attention with xFormers and bitsandbytes’ 8-bit optimizer. This guide will explore the train_lcm_distill_sd_wds.py script to help you become more familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/consistency_distillation +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment (try enabling torch.compile to significantly speedup training): Copied accelerate config To setup a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. Script parameters The following sections highlight parts of the training script that are important for understanding how to modify it, but it doesn’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to speedup training with mixed precision using the fp16 format, add the --mixed_precision parameter to the training command: Copied accelerate launch train_lcm_distill_sd_wds.py \ + --mixed_precision="fp16" Most of the parameters are identical to the parameters in the Text-to-image training guide, so you’ll focus on the parameters that are relevant to latent consistency distillation in this guide. 
--pretrained_teacher_model: the path to a pretrained latent diffusion model to use as the teacher model --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify an alternative VAE (like this VAE by madebyollin which works in fp16) --w_min and --w_max: the minimum and maximum guidance scale values for guidance scale sampling --num_ddim_timesteps: the number of timesteps for DDIM sampling --loss_type: the type of loss (L2 or Huber) to calculate for latent consistency distillation; Huber loss is generally preferred because it’s more robust to outliers --huber_c: the Huber loss parameter Training script The training script starts by creating a dataset class - Text2ImageDataset - for preprocessing the images and creating a training dataset. Copied def transform(example): + image = example["image"] + image = TF.resize(image, resolution, interpolation=transforms.InterpolationMode.BILINEAR) + + c_top, c_left, _, _ = transforms.RandomCrop.get_params(image, output_size=(resolution, resolution)) + image = TF.crop(image, c_top, c_left, resolution, resolution) + image = TF.to_tensor(image) + image = TF.normalize(image, [0.5], [0.5]) + + example["image"] = image + return example For improved performance on reading and writing large datasets stored in the cloud, this script uses the WebDataset format to create a preprocessing pipeline to apply transforms and create a dataset and dataloader for training. Images are processed and fed to the training loop without having to download the full dataset first. Copied processing_pipeline = [ + wds.decode("pil", handler=wds.ignore_and_continue), + wds.rename(image="jpg;png;jpeg;webp", text="text;txt;caption", handler=wds.warn_and_continue), + wds.map(filter_keys({"image", "text"})), + wds.map(transform), + wds.to_tuple("image", "text"), +] In the main() function, all the necessary components like the noise scheduler, tokenizers, text encoders, and VAE are loaded. The teacher UNet is also loaded here and then you can create a student UNet from the teacher UNet. The student UNet is updated by the optimizer during training. Copied teacher_unet = UNet2DConditionModel.from_pretrained( + args.pretrained_teacher_model, subfolder="unet", revision=args.teacher_revision +) + +unet = UNet2DConditionModel(**teacher_unet.config) +unet.load_state_dict(teacher_unet.state_dict(), strict=False) +unet.train() Now you can create the optimizer to update the UNet parameters: Copied optimizer = optimizer_class( + unet.parameters(), + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Create the dataset: Copied dataset = Text2ImageDataset( + train_shards_path_or_url=args.train_shards_path_or_url, + num_train_examples=args.max_train_samples, + per_gpu_batch_size=args.train_batch_size, + global_batch_size=args.train_batch_size * accelerator.num_processes, + num_workers=args.dataloader_num_workers, + resolution=args.resolution, + shuffle_buffer_size=1000, + pin_memory=True, + persistent_workers=True, +) +train_dataloader = dataset.train_dataloader Next, you’re ready to setup the training loop and implement the latent consistency distillation method (see Algorithm 1 in the paper for more details). This section of the script takes care of adding noise to the latents, sampling and creating a guidance scale embedding, and predicting the original image from the noise. 
Copied pred_x_0 = predicted_origin( + noise_pred, + start_timesteps, + noisy_model_input, + noise_scheduler.config.prediction_type, + alpha_schedule, + sigma_schedule, +) + +model_pred = c_skip_start * noisy_model_input + c_out_start * pred_x_0 It gets the teacher model predictions and the LCM predictions next, calculates the loss, and then backpropagates it to the LCM. Copied if args.loss_type == "l2": + loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") +elif args.loss_type == "huber": + loss = torch.mean( + torch.sqrt((model_pred.float() - target.float()) ** 2 + args.huber_c**2) - args.huber_c + ) If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script and start distilling! For this guide, you’ll use the --train_shards_path_or_url to specify the path to the Conceptual Captions 12M dataset stored on the Hub here. Set the MODEL_DIR environment variable to the name of the teacher model and OUTPUT_DIR to where you want to save the model. Copied export MODEL_DIR="runwayml/stable-diffusion-v1-5" +export OUTPUT_DIR="path/to/saved/model" + +accelerate launch train_lcm_distill_sd_wds.py \ + --pretrained_teacher_model=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --mixed_precision=fp16 \ + --resolution=512 \ + --learning_rate=1e-6 --loss_type="huber" --ema_decay=0.95 --adam_weight_decay=0.0 \ + --max_train_steps=1000 \ + --max_train_samples=4000000 \ + --dataloader_num_workers=8 \ + --train_shards_path_or_url="pipe:curl -L -s https://huggingface.co/datasets/laion/conceptual-captions-12m-webdataset/resolve/main/data/{00000..01099}.tar?download=true" \ + --validation_steps=200 \ + --checkpointing_steps=200 --checkpoints_total_limit=10 \ + --train_batch_size=12 \ + --gradient_checkpointing --enable_xformers_memory_efficient_attention \ + --gradient_accumulation_steps=1 \ + --use_8bit_adam \ + --resume_from_checkpoint=latest \ + --report_to=wandb \ + --seed=453645634 \ + --push_to_hub Once training is complete, you can use your new LCM for inference. Copied from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained("your-username/your-model", torch_dtype=torch.float16, variant="fp16") +pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", unet=unet, torch_dtype=torch.float16, variant="fp16") + +pipeline.scheduler = LCMScheduler.from_config(pipe.scheduler.config) +pipeline.to("cuda") + +prompt = "sushi rolls in the form of panda heads, sushi platter" + +image = pipeline(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] LoRA LoRA is a training technique for significantly reducing the number of trainable parameters. As a result, training is faster and it is easier to store the resulting weights because they are a lot smaller (~100MBs). Use the train_lcm_distill_lora_sd_wds.py or train_lcm_distill_lora_sdxl.wds.py script to train with LoRA. The LoRA training script is discussed in more detail in the LoRA training guide. Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the train_lcm_distill_sdxl_wds.py script to train a SDXL model with LoRA. The SDXL training script is discussed in more detail in the SDXL training guide. 
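Before moving on, the snippet below is a small standalone sketch (not taken from the training scripts) that compares the two --loss_type objectives discussed above on dummy tensors. The tensor shapes and the huber_c value are arbitrary, illustrative choices.
 Copied
import torch
import torch.nn.functional as F

# dummy student prediction and distillation target (shapes are arbitrary)
model_pred = torch.randn(4, 4, 64, 64)
target = torch.randn(4, 4, 64, 64)
huber_c = 0.001  # exposed by the script as --huber_c

# plain L2 objective
l2_loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")

# pseudo-Huber objective: quadratic near zero, roughly linear for large errors,
# which makes it more robust to outliers during distillation
huber_loss = torch.mean(
    torch.sqrt((model_pred.float() - target.float()) ** 2 + huber_c**2) - huber_c
)

print(f"L2: {l2_loss.item():.4f}, pseudo-Huber: {huber_loss.item():.4f}")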
Next steps Congratulations on distilling a LCM model! To learn more about LCM, the following may be helpful: Learn how to use LCMs for inference for text-to-image, image-to-image, and with LoRA checkpoints. Read the SDXL in 4 steps with Latent Consistency LoRAs blog post to learn more about SDXL LCM-LoRA’s for super fast inference, quality comparisons, benchmarks, and more. diff --git a/scrapped_outputs/f14097deee765398bf346aee9d190111.txt b/scrapped_outputs/f14097deee765398bf346aee9d190111.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab7809d34983d6a8ebbe82ac4a22518de74ebdc9 --- /dev/null +++ b/scrapped_outputs/f14097deee765398bf346aee9d190111.txt @@ -0,0 +1,31 @@ +Prior Transformer The Prior Transformer was originally introduced in Hierarchical Text-Conditional Image Generation with CLIP Latents by Ramesh et al. It is used to predict CLIP image embeddings from CLIP text embeddings; image embeddings are predicted through a denoising diffusion process. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. PriorTransformer class diffusers.PriorTransformer < source > ( num_attention_heads: int = 32 attention_head_dim: int = 64 num_layers: int = 20 embedding_dim: int = 768 num_embeddings = 77 additional_embeddings = 4 dropout: float = 0.0 time_embed_act_fn: str = 'silu' norm_in_type: Optional = None embedding_proj_norm_type: Optional = None encoder_hid_proj_type: Optional = 'linear' added_emb_type: Optional = 'prd' time_embed_dim: Optional = None embedding_proj_dim: Optional = None clip_embed_dim: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 32) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 64) — The number of channels in each head. num_layers (int, optional, defaults to 20) — The number of layers of Transformer blocks to use. embedding_dim (int, optional, defaults to 768) — The dimension of the model input hidden_states num_embeddings (int, optional, defaults to 77) — +The number of embeddings of the model input hidden_states additional_embeddings (int, optional, defaults to 4) — The number of additional tokens appended to the +projected hidden_states. The actual length of the used hidden_states is num_embeddings + additional_embeddings. dropout (float, optional, defaults to 0.0) — The dropout probability to use. time_embed_act_fn (str, optional, defaults to ‘silu’) — +The activation function to use to create timestep embeddings. 
norm_in_type (str, optional, defaults to None) — The normalization layer to apply on hidden states before +passing to Transformer blocks. Set it to None if normalization is not needed. embedding_proj_norm_type (str, optional, defaults to None) — +The normalization layer to apply on the input proj_embedding. Set it to None if normalization is not +needed. encoder_hid_proj_type (str, optional, defaults to linear) — +The projection layer to apply on the input encoder_hidden_states. Set it to None if +encoder_hidden_states is None. added_emb_type (str, optional, defaults to prd) — Additional embeddings to condition the model. +Choose from prd or None. if choose prd, it will prepend a token indicating the (quantized) dot +product between the text embedding and image embedding as proposed in the unclip paper +https://arxiv.org/abs/2204.06125 If it is None, no additional embeddings will be prepended. time_embed_dim (int, *optional*, defaults to None) -- The dimension of timestep embeddings. If None, will be set to num_attention_heads * attention_head_dim` embedding_proj_dim (int, optional, default to None) — +The dimension of proj_embedding. If None, will be set to embedding_dim. clip_embed_dim (int, optional, default to None) — +The dimension of the output. If None, will be set to embedding_dim. A Prior Transformer model. forward < source > ( hidden_states timestep: Union proj_embedding: FloatTensor encoder_hidden_states: Optional = None attention_mask: Optional = None return_dict: bool = True ) → ~models.prior_transformer.PriorTransformerOutput or tuple Parameters hidden_states (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The currently predicted image embeddings. timestep (torch.LongTensor) — +Current denoising step. proj_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +Projected embedding vector the denoising process is conditioned on. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_embeddings, embedding_dim)) — +Hidden states of the text embeddings the denoising process is conditioned on. attention_mask (torch.BoolTensor of shape (batch_size, num_embeddings)) — +Text mask for the text embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.prior_transformer.PriorTransformerOutput instead of a plain +tuple. Returns +~models.prior_transformer.PriorTransformerOutput or tuple + +If return_dict is True, a ~models.prior_transformer.PriorTransformerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + The PriorTransformer forward method. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. 
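As a quick illustration of the forward signature documented above, the sketch below instantiates a deliberately tiny, randomly initialized PriorTransformer and runs a single denoising step on dummy inputs. The configuration values, batch size, and timestep are illustrative assumptions only and much smaller than the defaults listed above.
 Copied
import torch
from diffusers import PriorTransformer

# tiny, randomly initialized model just to exercise the API (not a pretrained prior)
prior = PriorTransformer(
    num_attention_heads=2,
    attention_head_dim=8,
    num_layers=2,
    embedding_dim=32,
    num_embeddings=4,
    additional_embeddings=4,
)

batch_size = 1
hidden_states = torch.randn(batch_size, 32)             # current (noisy) CLIP image embedding
proj_embedding = torch.randn(batch_size, 32)            # conditioning embedding, e.g. the CLIP text embedding
encoder_hidden_states = torch.randn(batch_size, 4, 32)  # per-token text hidden states (num_embeddings tokens)
timestep = torch.tensor([10])

out = prior(
    hidden_states,
    timestep=timestep,
    proj_embedding=proj_embedding,
    encoder_hidden_states=encoder_hidden_states,
)
print(out.predicted_image_embedding.shape)  # torch.Size([1, 32])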
PriorTransformerOutput class diffusers.models.transformers.prior_transformer.PriorTransformerOutput < source > ( predicted_image_embedding: FloatTensor ) Parameters predicted_image_embedding (torch.FloatTensor of shape (batch_size, embedding_dim)) — +The predicted CLIP image embedding conditioned on the CLIP text embedding input. The output of PriorTransformer. diff --git a/scrapped_outputs/f148ece690723682c311b417bb239fff.txt b/scrapped_outputs/f148ece690723682c311b417bb239fff.txt new file mode 100644 index 0000000000000000000000000000000000000000..8c5bcb9f001a84d9b945c267456eb710daaafe80 --- /dev/null +++ b/scrapped_outputs/f148ece690723682c311b417bb239fff.txt @@ -0,0 +1,104 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. 
This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. 
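To put the Tips above into practice, here is a minimal usage sketch (not part of the API reference) that swaps DPMSolverSinglestepScheduler into an existing Stable Diffusion pipeline with from_config and samples in around 20 steps. The checkpoint, prompt, and step count are example choices, and a CUDA device is assumed.
 Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# reuse the existing scheduler config, but sample with singlestep DPM-Solver++ (order 2 for guided sampling)
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, solver_order=2)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=20).images[0]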
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/f16224af7ed70e7cee34bbf6687da088.txt b/scrapped_outputs/f16224af7ed70e7cee34bbf6687da088.txt new file mode 100644 index 0000000000000000000000000000000000000000..0824b6b7ee98c1a6f9d50f91c37b16cb080bb278 --- /dev/null +++ b/scrapped_outputs/f16224af7ed70e7cee34bbf6687da088.txt @@ -0,0 +1,46 @@ +EulerAncestralDiscreteScheduler A scheduler that uses ancestral sampling with Euler method steps. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerAncestralDiscreteScheduler class diffusers.EulerAncestralDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. 
This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Ancestral sampling with Euler method steps. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → EulerAncestralDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a +EulerAncestralDiscreteSchedulerOutput or tuple. Returns +EulerAncestralDiscreteSchedulerOutput or tuple + +If return_dict is True, +EulerAncestralDiscreteSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerAncestralDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. 
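As a minimal usage sketch (not part of the reference above), the snippet below swaps EulerAncestralDiscreteScheduler into a Stable Diffusion pipeline with from_config and samples in a small number of steps, where this scheduler tends to do well. The checkpoint, prompt, seed, and step count are example choices, and a CUDA device is assumed.
 Copied
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# ancestral sampling is stochastic, so pass a generator if you want repeatable results
generator = torch.manual_seed(0)
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=25,
    generator=generator,
).images[0]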
diff --git a/scrapped_outputs/f17caa96f44fa01e16616c1e7dd4bb4f.txt b/scrapped_outputs/f17caa96f44fa01e16616c1e7dd4bb4f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f1d815eba5ceb58776cb698462a3d949.txt b/scrapped_outputs/f1d815eba5ceb58776cb698462a3d949.txt new file mode 100644 index 0000000000000000000000000000000000000000..f23ed184f8989b9b541a7cf22695885314b847bb --- /dev/null +++ b/scrapped_outputs/f1d815eba5ceb58776cb698462a3d949.txt @@ -0,0 +1,333 @@ +Super-Resolution + + +StableDiffusionUpscalePipeline + +The upscaler diffusion model was created by the researchers and engineers from CompVis, Stability AI, and LAION, as part of Stable Diffusion 2.0. StableDiffusionUpscalePipeline can be used to enhance the resolution of input images by a factor of 4. +The original codebase can be found here: +Stable Diffusion v2: Stability-AI/stablediffusion +Available Checkpoints are: +stabilityai/stable-diffusion-x4-upscaler (x4 resolution resolution): stable-diffusion-x4-upscaler + +class diffusers.StableDiffusionUpscalePipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +low_res_scheduler: DDPMScheduler +scheduler: KarrasDiffusionSchedulers +safety_checker: typing.Optional[typing.Any] = None +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] = None +watermarker: typing.Optional[typing.Any] = None +max_noise_level: int = 350 + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +low_res_scheduler (SchedulerMixin) — +A scheduler used to add initial noise to the low res conditioning image. It must be an instance of +DDPMScheduler. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +Pipeline for text-guided image super-resolution using Stable Diffusion 2. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, typing.List[PIL.Image.Image]] = None +num_inference_steps: int = 75 +guidance_scale: float = 9.0 +noise_level: int = 20 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image or ListPIL.Image.Image or torch.FloatTensor) — +Image, or tensor representing an image batch which will be upscaled. * + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. 
+ + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import StableDiffusionUpscalePipeline +>>> import torch + +>>> # load model and scheduler +>>> model_id = "stabilityai/stable-diffusion-x4-upscaler" +>>> pipeline = StableDiffusionUpscalePipeline.from_pretrained( +... model_id, revision="fp16", torch_dtype=torch.float16 +... ) +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) +>>> prompt = "a white cat" + +>>> upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +>>> upscaled_image.save("upsampled_cat.png") + +enable_attention_slicing + +< +source +> +( +slice_size: typing.Union[str, int, NoneType] = 'auto' + +) + + +Parameters + +slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. + + + +Enable sliced attention computation. +When this option is enabled, the attention module will split the input tensor in slices, to compute attention +in several steps. This is useful to save some memory in exchange for a small speed decrease. + +disable_attention_slicing + +< +source +> +( +) + + + +Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method will go +back to computing attention in one step. + +enable_xformers_memory_efficient_attention + +< +source +> +( +attention_op: typing.Optional[typing.Callable] = None + +) + + +Parameters + +attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. + + + +Enable memory efficient attention as implemented in xformers. +When this option is enabled, you should observe lower GPU memory usage and a potential speed up at inference +time. Speed up at training time is not guaranteed. +Warning: When Memory Efficient Attention and Sliced attention are both enabled, the Memory Efficient Attention +is used. 
+ +Examples: + + + Copied +>>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) + +disable_xformers_memory_efficient_attention + +< +source +> +( +) + + + +Disable memory efficient attention as implemented in xformers. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/f1ff4f6617d91cc0ce49cf89812a3aa8.txt b/scrapped_outputs/f1ff4f6617d91cc0ce49cf89812a3aa8.txt new file mode 100644 index 0000000000000000000000000000000000000000..d38fe382771f8913300e4beb3a4637c7f124a711 --- /dev/null +++ b/scrapped_outputs/f1ff4f6617d91cc0ce49cf89812a3aa8.txt @@ -0,0 +1,41 @@ +KDPM2DiscreteScheduler The KDPM2DiscreteScheduler is inspired by the Elucidating the Design Space of Diffusion-Based Generative Models paper, and the scheduler is ported from and created by Katherine Crowson. The original codebase can be found at crowsonkb/k-diffusion. KDPM2DiscreteScheduler class diffusers.KDPM2DiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str = 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.00085) — +The starting beta value of inference. beta_end (float, defaults to 0.012) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. 
prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. KDPM2DiscreteScheduler is inspired by the DPMSolver2 and Algorithm 2 from the Elucidating the Design Space of +Diffusion-Based Generative Models paper. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
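As a minimal usage sketch (not part of the reference above), the snippet below swaps KDPM2DiscreteScheduler into a Stable Diffusion pipeline with from_config, enabling the Karras sigmas described above. The checkpoint, prompt, and step count are example choices, and a CUDA device is assumed.
 Copied
import torch
from diffusers import DiffusionPipeline, KDPM2DiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# reuse the existing scheduler config and turn on Karras step sizes
pipe.scheduler = KDPM2DiscreteScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]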
diff --git a/scrapped_outputs/f20de40ac1bacc6ec211b361e20febbb.txt b/scrapped_outputs/f20de40ac1bacc6ec211b361e20febbb.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f2246f4766cd32e7ff9a7046d1ae5291.txt b/scrapped_outputs/f2246f4766cd32e7ff9a7046d1ae5291.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/f2246f4766cd32e7ff9a7046d1ae5291.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/f28ce147bf91e35c8ea179fab7991ce8.txt b/scrapped_outputs/f28ce147bf91e35c8ea179fab7991ce8.txt new file mode 100644 index 0000000000000000000000000000000000000000..0454f29f161e7c79737a21f6448f556cf18eca51 --- /dev/null +++ b/scrapped_outputs/f28ce147bf91e35c8ea179fab7991ce8.txt @@ -0,0 +1,81 @@ +Push files to the Hub 🤗 Diffusers provides a PushToHubMixin for uploading your model, scheduler, or pipeline to the Hub. It is an easy way to store your files on the Hub, and also allows you to share your work with others. Under the hood, the PushToHubMixin: creates a repository on the Hub saves your model, scheduler, or pipeline files so they can be reloaded later uploads folder containing these files to the Hub This guide will show you how to use the PushToHubMixin to upload your files to the Hub. 
You’ll need to log in to your Hub account with your access token first: Copied from huggingface_hub import notebook_login + +notebook_login() Models To push a model to the Hub, call push_to_hub() and specify the repository id of the model to be stored on the Hub: Copied from diffusers import ControlNetModel + +controlnet = ControlNetModel( + block_out_channels=(32, 64), + layers_per_block=2, + in_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + cross_attention_dim=32, + conditioning_embedding_out_channels=(16, 32), +) +controlnet.push_to_hub("my-controlnet-model") For models, you can also specify the variant of the weights to push to the Hub. For example, to push fp16 weights: Copied controlnet.push_to_hub("my-controlnet-model", variant="fp16") The push_to_hub() function saves the model’s config.json file and the weights are automatically saved in the safetensors format. Now you can reload the model from your repository on the Hub: Copied model = ControlNetModel.from_pretrained("your-namespace/my-controlnet-model") Scheduler To push a scheduler to the Hub, call push_to_hub() and specify the repository id of the scheduler to be stored on the Hub: Copied from diffusers import DDIMScheduler + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) +scheduler.push_to_hub("my-controlnet-scheduler") The push_to_hub() function saves the scheduler’s scheduler_config.json file to the specified repository. Now you can reload the scheduler from your repository on the Hub: Copied scheduler = DDIMScheduler.from_pretrained("your-namepsace/my-controlnet-scheduler") Pipeline You can also push an entire pipeline with all it’s components to the Hub. 
For example, initialize the components of a StableDiffusionPipeline with the parameters you want: Copied from diffusers import ( + UNet2DConditionModel, + AutoencoderKL, + DDIMScheduler, + StableDiffusionPipeline, +) +from transformers import CLIPTextModel, CLIPTextConfig, CLIPTokenizer + +unet = UNet2DConditionModel( + block_out_channels=(32, 64), + layers_per_block=2, + sample_size=32, + in_channels=4, + out_channels=4, + down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), + up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), + cross_attention_dim=32, +) + +scheduler = DDIMScheduler( + beta_start=0.00085, + beta_end=0.012, + beta_schedule="scaled_linear", + clip_sample=False, + set_alpha_to_one=False, +) + +vae = AutoencoderKL( + block_out_channels=[32, 64], + in_channels=3, + out_channels=3, + down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], + up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], + latent_channels=4, +) + +text_encoder_config = CLIPTextConfig( + bos_token_id=0, + eos_token_id=2, + hidden_size=32, + intermediate_size=37, + layer_norm_eps=1e-05, + num_attention_heads=4, + num_hidden_layers=5, + pad_token_id=1, + vocab_size=1000, +) +text_encoder = CLIPTextModel(text_encoder_config) +tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip") Pass all of the components to the StableDiffusionPipeline and call push_to_hub() to push the pipeline to the Hub: Copied components = { + "unet": unet, + "scheduler": scheduler, + "vae": vae, + "text_encoder": text_encoder, + "tokenizer": tokenizer, + "safety_checker": None, + "feature_extractor": None, +} + +pipeline = StableDiffusionPipeline(**components) +pipeline.push_to_hub("my-pipeline") The push_to_hub() function saves each component to a subfolder in the repository. Now you can reload the pipeline from your repository on the Hub: Copied pipeline = StableDiffusionPipeline.from_pretrained("your-namespace/my-pipeline") Privacy Set private=True in the push_to_hub() function to keep your model, scheduler, or pipeline files private: Copied controlnet.push_to_hub("my-controlnet-model-private", private=True) Private repositories are only visible to you, and other users won’t be able to clone the repository and your repository won’t appear in search results. Even if a user has the URL to your private repository, they’ll receive a 404 - Sorry, we can't find the page you are looking for. You must be logged in to load a model from a private repository. diff --git a/scrapped_outputs/f29f4f0418006158373d480369066800.txt b/scrapped_outputs/f29f4f0418006158373d480369066800.txt new file mode 100644 index 0000000000000000000000000000000000000000..684383d3b766fe2306777de3fdfe7ac6f1cc9bb6 --- /dev/null +++ b/scrapped_outputs/f29f4f0418006158373d480369066800.txt @@ -0,0 +1,29 @@ +Create a dataset for training There are many datasets on the Hub to train a model on, but if you can’t find one you’re interested in or want to use your own, you can create a dataset with the 🤗 Datasets library. The dataset structure depends on the task you want to train your model on. The most basic dataset structure is a directory of images for tasks like unconditional image generation. Another dataset structure may be a directory of images and a text file containing their corresponding text captions for tasks like text-to-image generation. 
This guide will show you two ways to create a dataset to finetune on: provide a folder of images to the --train_data_dir argument upload a dataset to the Hub and pass the dataset repository id to the --dataset_name argument 💡 Learn more about how to create an image dataset for training in the Create an image dataset guide. Provide a dataset as a folder For unconditional generation, you can provide your own dataset as a folder of images. The training script uses the ImageFolder builder from 🤗 Datasets to automatically build a dataset from the folder. Your directory structure should look like: Copied data_dir/xxx.png +data_dir/xxy.png +data_dir/[...]/xxz.png Pass the path to the dataset directory to the --train_data_dir argument, and then you can start training: Copied accelerate launch train_unconditional.py \ + --train_data_dir \ + Upload your data to the Hub 💡 For more details and context about creating and uploading a dataset to the Hub, take a look at the Image search with 🤗 Datasets post. Start by creating a dataset with the ImageFolder feature, which creates an image column containing the PIL-encoded images. You can use the data_dir or data_files parameters to specify the location of the dataset. The data_files parameter supports mapping specific files to dataset splits like train or test: Copied from datasets import load_dataset + +# example 1: local folder +dataset = load_dataset("imagefolder", data_dir="path_to_your_folder") + +# example 2: local files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset("imagefolder", data_files="path_to_zip_file") + +# example 3: remote files (supported formats are tar, gzip, zip, xz, rar, zstd) +dataset = load_dataset( + "imagefolder", + data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", +) + +# example 4: providing several splits +dataset = load_dataset( + "imagefolder", data_files={"train": ["path/to/file1", "path/to/file2"], "test": ["path/to/file3", "path/to/file4"]} +) Then use the push_to_hub method to upload the dataset to the Hub: Copied # assuming you have ran the huggingface-cli login command in a terminal +dataset.push_to_hub("name_of_your_dataset") + +# if you want to push to a private repo, simply pass private=True: +dataset.push_to_hub("name_of_your_dataset", private=True) Now the dataset is available for training by passing the dataset name to the --dataset_name argument: Copied accelerate launch --mixed_precision="fp16" train_text_to_image.py \ + --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \ + --dataset_name="name_of_your_dataset" \ + Next steps Now that you’ve created a dataset, you can plug it into the train_data_dir (if your dataset is local) or dataset_name (if your dataset is on the Hub) arguments of a training script. For your next steps, feel free to try and use your dataset to train a model for unconditional generation or text-to-image generation! diff --git a/scrapped_outputs/f2b5ac50f22512908008f39f77d71ea6.txt b/scrapped_outputs/f2b5ac50f22512908008f39f77d71ea6.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/f2b5ac50f22512908008f39f77d71ea6.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. 
It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. 
If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. 
Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/f2bebcb3a2ae837d12947cbb4547613a.txt b/scrapped_outputs/f2bebcb3a2ae837d12947cbb4547613a.txt new file mode 100644 index 0000000000000000000000000000000000000000..6c6930421010fe84f98ab906144201bb0390aa30 --- /dev/null +++ b/scrapped_outputs/f2bebcb3a2ae837d12947cbb4547613a.txt @@ -0,0 +1,81 @@ +Latent Diffusion Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. The abstract from the paper is: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. The original codebase can be found at CompVis/latent-diffusion. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. LDMTextToImagePipeline class diffusers.LDMTextToImagePipeline < source > ( vqvae: Union bert: PreTrainedModel tokenizer: PreTrainedTokenizer unet: Union scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. bert (LDMBertModel) — +Text-encoder model based on BERT. tokenizer (BertTokenizer) — +A BertTokenizer to tokenize text. 
unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-to-image generation using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union height: Optional = None width: Optional = None num_inference_steps: Optional = 50 guidance_scale: Optional = 1.0 eta: Optional = 0.0 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 1.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Example: Copied >>> from diffusers import DiffusionPipeline + +>>> # load model and scheduler +>>> ldm = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256") + +>>> # run pipeline in inference (sample random noise and denoise) +>>> prompt = "A painting of a squirrel eating a burger" +>>> images = ldm([prompt], num_inference_steps=50, eta=0.3, guidance_scale=6).images + +>>> # save images +>>> for idx, image in enumerate(images): +... image.save(f"squirrel-{idx}.png") LDMSuperResolutionPipeline class diffusers.LDMSuperResolutionPipeline < source > ( vqvae: VQModel unet: UNet2DModel scheduler: Union ) Parameters vqvae (VQModel) — +Vector-quantized (VQ) model to encode and decode images to and from latent representations. unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. A pipeline for image super-resolution using latent diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None batch_size: Optional = 1 num_inference_steps: Optional = 100 eta: Optional = 0.0 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (torch.Tensor or PIL.Image.Image) — +Image or tensor representing an image batch to be used as the starting point for the process. batch_size (int, optional, defaults to 1) — +Number of images to generate. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. Example: Copied >>> import requests +>>> from PIL import Image +>>> from io import BytesIO +>>> from diffusers import LDMSuperResolutionPipeline +>>> import torch + +>>> # load model and scheduler +>>> pipeline = LDMSuperResolutionPipeline.from_pretrained("CompVis/ldm-super-resolution-4x-openimages") +>>> pipeline = pipeline.to("cuda") + +>>> # let's download an image +>>> url = ( +... "https://user-images.githubusercontent.com/38061659/199705896-b48e17b8-b231-47cd-a270-4ffa5a93fa3e.png" +... ) +>>> response = requests.get(url) +>>> low_res_img = Image.open(BytesIO(response.content)).convert("RGB") +>>> low_res_img = low_res_img.resize((128, 128)) + +>>> # run pipeline in inference (sample random noise and denoise) +>>> upscaled_image = pipeline(low_res_img, num_inference_steps=100, eta=1).images[0] +>>> # save image +>>> upscaled_image.save("ldm_generated_image.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/f2d1eab53220cefdc0db6eb3cac2731c.txt b/scrapped_outputs/f2d1eab53220cefdc0db6eb3cac2731c.txt new file mode 100644 index 0000000000000000000000000000000000000000..1fe3bd3f06785a74a09c4c4199e812fcd2270991 --- /dev/null +++ b/scrapped_outputs/f2d1eab53220cefdc0db6eb3cac2731c.txt @@ -0,0 +1,6 @@ +Overview 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in diffusers/examples. 
Each training script is: Self-contained: the training script does not depend on any local files, and all packages required to run the script are installed from the requirements.txt file. Easy-to-tweak: the training scripts are an example of how to train a diffusion model for a specific task and won’t work out-of-the-box for every training scenario. You’ll likely need to adapt the training script for your specific use-case. To help you with that, we’ve fully exposed the data preprocessing code and the training loop so you can modify it for your own use. Beginner-friendly: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. Single-purpose: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts includes: Training SDXL-support LoRA-support Flax-support unconditional image generation text-to-image 👍 👍 👍 textual inversion 👍 DreamBooth 👍 👍 👍 ControlNet 👍 👍 InstructPix2Pix 👍 Custom Diffusion T2I-Adapters 👍 Kandinsky 2.2 👍 Wuerstchen 👍 These examples are actively maintained, so please feel free to open an issue if they aren’t working as expected. If you feel like another training example should be included, you’re more than welcome to start a Feature Request to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose. Install Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the folder of the training script (for example, DreamBooth) and install the requirements.txt file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you’re using one of these scripts, make sure you install its corresponding requirements file. Copied cd examples/dreambooth +pip install -r requirements.txt +# to train SDXL with DreamBooth +pip install -r requirements_sdxl.txt To speed up training and reduce memory usage, we recommend: using PyTorch 2.0 or higher to automatically use scaled dot product attention during training (you don’t need to make any changes to the training code) installing xFormers to enable memory-efficient attention diff --git a/scrapped_outputs/f2d82a5e146bc34e271e3904931d7655.txt b/scrapped_outputs/f2d82a5e146bc34e271e3904931d7655.txt new file mode 100644 index 0000000000000000000000000000000000000000..2bdd92145f3e1d95344f3a558a1b0165b1443a85 --- /dev/null +++ b/scrapped_outputs/f2d82a5e146bc34e271e3904931d7655.txt @@ -0,0 +1,403 @@ +Kandinsky The Kandinsky models are a series of multilingual text-to-image generation models. The Kandinsky 2.0 model uses two multilingual text encoders and concatenates those results for the UNet. Kandinsky 2.1 changes the architecture to include an image prior model (CLIP) to generate a mapping between text and image embeddings. The mapping provides better text-image alignment and it is used with the text embeddings during training, leading to higher quality results. Finally, Kandinsky 2.1 uses a Modulating Quantized Vectors (MoVQ) decoder - which adds a spatial conditional normalization layer to increase photorealism - to decode the latents into images. 
Kandinsky 2.2 improves on the previous model by replacing the image encoder of the image prior model with a larger CLIP-ViT-G model to improve quality. The image prior model was also retrained on images with different resolutions and aspect ratios to generate higher-resolution images and different image sizes. Kandinsky 3 simplifies the architecture and shifts away from the two-stage generation process involving the prior model and diffusion model. Instead, Kandinsky 3 uses Flan-UL2 to encode text, a UNet with BigGan-deep blocks, and Sber-MoVQGAN to decode the latents into images. Text understanding and generated image quality are primarily achieved by using a larger text encoder and UNet. This guide will show you how to use the Kandinsky models for text-to-image, image-to-image, inpainting, interpolation, and more. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate Kandinsky 2.1 and 2.2 usage is very similar! The only difference is Kandinsky 2.2 doesn’t accept prompt as an input when decoding the latents. Instead, Kandinsky 2.2 only accepts image_embeds during decoding. Kandinsky 3 has a more concise architecture and it doesn’t require a prior model. This means its usage is identical to other diffusion models like Stable Diffusion XL. Text-to-image To use the Kandinsky models for any task, you always start by setting up the prior pipeline to encode the prompt and generate the image embeddings. The prior pipeline also generates negative_image_embeds that correspond to the negative prompt "". For better results, you can pass an actual negative_prompt to the prior pipeline, but this will increase the effective batch size of the prior pipeline by 2x. 
+ + + + Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt, guidance_scale=1.0).to_tuple() Now pass all the prompts and embeddings to the KandinskyPipeline to generate an image: Copied image = pipeline(prompt, image_embeds=image_embeds, negative_prompt=negative_prompt, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image + + + + Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +import torch + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16).to("cuda") +pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16).to("cuda") + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" # optional to include a negative prompt, but results are usually better +image_embeds, negative_image_embeds = prior_pipeline(prompt, guidance_scale=1.0).to_tuple() Pass the image_embeds and negative_image_embeds to the KandinskyV22Pipeline to generate an image: Copied image = pipeline(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768).images[0] +image + + +Kandinsky 3 doesn’t require a prior model so you can directly load the Kandinsky3Pipeline and pass a prompt to generate an image: Copied from diffusers import Kandinsky3Pipeline +import torch + +pipeline = Kandinsky3Pipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +image = pipeline(prompt).images[0] +image + + +🤗 Diffusers also provides an end-to-end API with the KandinskyCombinedPipeline and KandinskyV22CombinedPipeline, meaning you don’t have to separately load the prior and text-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. 
Use the AutoPipelineForText2Image to automatically call the combined pipelines under the hood: + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image + + + + Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A alien cheeseburger creature eating itself, claymation, cinematic, moody lighting" +negative_prompt = "low quality, bad quality" + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, prior_guidance_scale=1.0, guidance_scale=4.0, height=768, width=768).images[0] +image + + + Image-to-image For image-to-image, pass the initial image and text prompt to condition the image to the pipeline. Start by loading the prior pipeline: + + + + Copied import torch +from diffusers import KandinskyImg2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyImg2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + + + Copied import torch +from diffusers import KandinskyV22Img2ImgPipeline, KandinskyPriorPipeline + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyV22Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + +Kandinsky 3 doesn’t require a prior model so you can directly load the image-to-image pipeline: Copied from diffusers import Kandinsky3Img2ImgPipeline +from diffusers.utils import load_image +import torch + +pipeline = Kandinsky3Img2ImgPipeline.from_pretrained("kandinsky-community/kandinsky-3", variant="fp16", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + + +Download an image to condition on: Copied from diffusers.utils import load_image + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) Generate the image_embeds and negative_image_embeds with the prior pipeline: Copied prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +image_embeds, negative_image_embeds = prior_pipeline(prompt, negative_prompt).to_tuple() Now pass the original image, and all the prompts and embeddings to the pipeline to generate an image: + + + + Copied from diffusers.utils import make_image_grid + +image = pipeline(prompt, negative_prompt=negative_prompt, image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), 
image.resize((512, 512))], rows=1, cols=2) + + + + Copied from diffusers.utils import make_image_grid + +image = pipeline(image=original_image, image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + + Copied image = pipeline(prompt, negative_prompt=negative_prompt, image=image, strength=0.75, num_inference_steps=25).images[0] +image + + +🤗 Diffusers also provides an end-to-end API with the KandinskyImg2ImgCombinedPipeline and KandinskyV22Img2ImgCombinedPipeline, meaning you don’t have to separately load the prior and image-to-image pipeline. The combined pipeline automatically loads both the prior model and the decoder. You can still set different values for the prior pipeline with the prior_guidance_scale and prior_num_inference_steps parameters if you want. Use the AutoPipelineForImage2Image to automatically call the combined pipelines under the hood: + + + + Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + + Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16) +pipeline.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) + +original_image.thumbnail((768, 768)) + +image = pipeline(prompt=prompt, negative_prompt=negative_prompt, image=original_image, strength=0.3).images[0] +make_image_grid([original_image.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) + + + Inpainting ⚠️ The Kandinsky models use ⬜️ white pixels to represent the masked area now instead of black pixels. If you are using KandinskyInpaintPipeline in production, you need to change the mask to use white pixels: Copied # For PIL input +import PIL.ImageOps +mask = PIL.ImageOps.invert(mask) + +# For PyTorch and NumPy input +mask = 1 - mask For inpainting, you’ll need the original image, a mask of the area to replace in the original image, and a text prompt of what to inpaint. 
Load the prior pipeline: + + + + Copied from diffusers import KandinskyInpaintPipeline, KandinskyPriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyInpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + + + Copied from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline +from diffusers.utils import load_image, make_image_grid +import torch +import numpy as np +from PIL import Image + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +pipeline = KandinskyV22InpaintPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + + +Load an initial image and create a mask: Copied init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 Generate the embeddings with the prior pipeline: Copied prompt = "a hat" +prior_output = prior_pipeline(prompt) Now pass the initial image, mask, and prompt and embeddings to the pipeline to generate an image: + + + + Copied output_image = pipeline(prompt, image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + + Copied output_image = pipeline(image=init_image, mask_image=mask, **prior_output, height=768, width=768, num_inference_steps=150).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + +You can also use the end-to-end KandinskyInpaintCombinedPipeline and KandinskyV22InpaintCombinedPipeline to call the prior and decoder pipelines together under the hood. 
Use the AutoPipelineForInpainting for this: + + + + Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-1-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + + Copied import torch +import numpy as np +from PIL import Image +from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained("kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() + +init_image = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +mask = np.zeros((768, 768), dtype=np.float32) +# mask area above cat's head +mask[:250, 250:-250] = 1 +prompt = "a hat" + +output_image = pipe(prompt=prompt, image=init_image, mask_image=mask).images[0] +mask = Image.fromarray((mask*255).astype('uint8'), 'L') +make_image_grid([init_image, mask, output_image], rows=1, cols=3) + + + Interpolation Interpolation allows you to explore the latent space between the image and text embeddings which is a cool way to see some of the prior model’s intermediate outputs. Load the prior pipeline and two images you’d like to interpolate: + + + + Copied from diffusers import KandinskyPriorPipeline, KandinskyPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) + + + + Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +from diffusers.utils import load_image, make_image_grid +import torch + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True).to("cuda") +img_1 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png") +img_2 = load_image("https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/starry_night.jpeg") +make_image_grid([img_1.resize((512,512)), img_2.resize((512,512))], rows=1, cols=2) + + + a cat Van Gogh's Starry Night painting Specify the text or images to interpolate, and set the weights for each text or image. Experiment with the weights to see how they affect the interpolation! 
Copied images_texts = ["a cat", img_1, img_2] +weights = [0.3, 0.3, 0.4] Call the interpolate function to generate the embeddings, and then pass them to the pipeline to generate the image: + + + + Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image + + + + Copied # prompt can be left empty +prompt = "" +prior_out = prior_pipeline.interpolate(images_texts, weights) + +pipeline = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True).to("cuda") + +image = pipeline(prompt, **prior_out, height=768, width=768).images[0] +image + + + ControlNet ⚠️ ControlNet is only supported for Kandinsky 2.2! ControlNet enables conditioning large pretrained diffusion models with additional inputs such as a depth map or edge detection. For example, you can condition Kandinsky 2.2 with a depth map so the model understands and preserves the structure of the depth image. Let’s load an image and extract its depth map: Copied from diffusers.utils import load_image + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) +img Then you can use the depth-estimation Pipeline from 🤗 Transformers to process the image and retrieve the depth map: Copied import torch +import numpy as np + +from transformers import pipeline + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Text-to-image Load the prior pipeline and the KandinskyV22ControlnetPipeline: Copied from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline + +prior_pipeline = KandinskyV22PriorPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Generate the image embeddings from a prompt and negative prompt: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +image_emb, zero_image_emb = prior_pipeline( + prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator +).to_tuple() Finally, pass the image embeddings and the depth image to the KandinskyV22ControlnetPipeline to generate an image: Copied image = pipeline(image_embeds=image_emb, 
negative_image_embeds=zero_image_emb, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +image Image-to-image For image-to-image with ControlNet, you’ll need to use the: KandinskyV22PriorEmb2EmbPipeline to generate the image embeddings from a text prompt and an image KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings Process and extract a depth map of an initial image of a cat with the depth-estimation Pipeline from 🤗 Transformers: Copied import torch +import numpy as np + +from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline +from diffusers.utils import load_image +from transformers import pipeline + +img = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinskyv22/cat.png" +).resize((768, 768)) + +def make_hint(image, depth_estimator): + image = depth_estimator(image)["depth"] + image = np.array(image) + image = image[:, :, None] + image = np.concatenate([image, image, image], axis=2) + detected_map = torch.from_numpy(image).float() / 255.0 + hint = detected_map.permute(2, 0, 1) + return hint + +depth_estimator = pipeline("depth-estimation") +hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") Load the prior pipeline and the KandinskyV22ControlnetImg2ImgPipeline: Copied prior_pipeline = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16, use_safetensors=True +).to("cuda") + +pipeline = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( + "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 +).to("cuda") Pass a text prompt and the initial image to the prior pipeline to generate the image embeddings: Copied prompt = "A robot, 4k photo" +negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" + +generator = torch.Generator(device="cuda").manual_seed(43) + +img_emb = prior_pipeline(prompt=prompt, image=img, strength=0.85, generator=generator) +negative_emb = prior_pipeline(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) Now you can run the KandinskyV22ControlnetImg2ImgPipeline to generate an image from the initial image and the image embeddings: Copied image = pipeline(image=img, strength=0.5, image_embeds=img_emb.image_embeds, negative_image_embeds=negative_emb.image_embeds, hint=hint, num_inference_steps=50, generator=generator, height=768, width=768).images[0] +make_image_grid([img.resize((512, 512)), image.resize((512, 512))], rows=1, cols=2) Optimizations Kandinsky is unique because it requires a prior pipeline to generate the mappings, and a second pipeline to decode the latents into an image. Optimization efforts should be focused on the second pipeline because that is where the bulk of the computation is done. Here are some tips to improve Kandinsky during inference. 
Enable xFormers if you’re using PyTorch < 2.0: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_xformers_memory_efficient_attention() Enable torch.compile if you’re using PyTorch >= 2.0 to automatically use scaled dot-product attention (SDPA): Copied pipe.unet.to(memory_format=torch.channels_last) ++ pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) This is the same as explicitly setting the attention processor to use AttnAddedKVProcessor2_0: Copied from diffusers.models.attention_processor import AttnAddedKVProcessor2_0 + +pipe.unet.set_attn_processor(AttnAddedKVProcessor2_0()) Offload the model to the CPU with enable_model_cpu_offload() to avoid out-of-memory errors: Copied from diffusers import DiffusionPipeline + import torch + + pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16) ++ pipe.enable_model_cpu_offload() By default, the text-to-image pipeline uses the DDIMScheduler but you can replace it with another scheduler like DDPMScheduler to see how that affects the tradeoff between inference speed and image quality: Copied from diffusers import DDPMScheduler +from diffusers import DiffusionPipeline + +scheduler = DDPMScheduler.from_pretrained("kandinsky-community/kandinsky-2-1", subfolder="ddpm_scheduler") +pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-1", scheduler=scheduler, torch_dtype=torch.float16, use_safetensors=True).to("cuda") diff --git a/scrapped_outputs/f2eba4438d99d92f230fe513534099cb.txt b/scrapped_outputs/f2eba4438d99d92f230fe513534099cb.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff6a9e3f448f32b5e091930c4a212ed0ac90283a --- /dev/null +++ b/scrapped_outputs/f2eba4438d99d92f230fe513534099cb.txt @@ -0,0 +1,50 @@ +Attention Processor An attention processor is a class for applying different types of attention mechanisms. AttnProcessor class diffusers.models.attention_processor.AttnProcessor < source > ( ) Default processor for performing attention-related computations. AttnProcessor2_0 class diffusers.models.attention_processor.AttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). AttnAddedKVProcessor class diffusers.models.attention_processor.AttnAddedKVProcessor < source > ( ) Processor for performing attention-related computations with extra learnable key and value matrices for the text +encoder. AttnAddedKVProcessor2_0 class diffusers.models.attention_processor.AttnAddedKVProcessor2_0 < source > ( ) Processor for performing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0), with extra +learnable key and value matrices for the text encoder. CrossFrameAttnProcessor class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor < source > ( batch_size = 2 ) Cross frame attention processor. Each frame attends the first frame. CustomDiffusionAttnProcessor class diffusers.models.attention_processor.CustomDiffusionAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. 
train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method. CustomDiffusionAttnProcessor2_0 class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0 < source > ( train_kv: bool = True train_q_out: bool = True hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0’s memory-efficient scaled +dot-product attention. CustomDiffusionXFormersAttnProcessor class diffusers.models.attention_processor.CustomDiffusionXFormersAttnProcessor < source > ( train_kv: bool = True train_q_out: bool = False hidden_size: Optional = None cross_attention_dim: Optional = None out_bias: bool = True dropout: float = 0.0 attention_op: Optional = None ) Parameters train_kv (bool, defaults to True) — +Whether to newly train the key and value matrices corresponding to the text features. train_q_out (bool, defaults to True) — +Whether to newly train query matrices corresponding to the latent image features. hidden_size (int, optional, defaults to None) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. out_bias (bool, defaults to True) — +Whether to include the bias parameter in train_q_out. dropout (float, optional, defaults to 0.0) — +The dropout probability to use. attention_op (Callable, optional, defaults to None) — +The base +operator to use +as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator. Processor for implementing memory efficient attention using xFormers for the Custom Diffusion method. FusedAttnProcessor2_0 class diffusers.models.attention_processor.FusedAttnProcessor2_0 < source > ( ) Processor for implementing scaled dot-product attention (enabled by default if you’re using PyTorch 2.0). +It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is currently 🧪 experimental in nature and can change in future. 
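These processor classes are usually attached to a model rather than called directly. As a minimal sketch (assuming a UNet2DConditionModel loaded from an example Stable Diffusion checkpoint; the model id is only illustrative), you can swap in a processor with set_attn_processor: Copied >>> import torch
>>> from diffusers import UNet2DConditionModel
>>> from diffusers.models.attention_processor import AttnProcessor2_0

>>> # load only the UNet of an example checkpoint
>>> unet = UNet2DConditionModel.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
... )

>>> # replace every attention processor with the PyTorch 2.0 scaled dot-product attention processor
>>> unet.set_attn_processor(AttnProcessor2_0())

>>> # attn_processors maps each attention layer name to its currently attached processor
>>> next(iter(unet.attn_processors.items()))
The same pattern applies to the other processors on this page, provided the chosen processor matches how the model's attention layers are configured.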
LoRAAttnAddedKVProcessor class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor < source > ( hidden_size: int cross_attention_dim: Optional = None rank: int = 4 network_alpha: Optional = None ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional, defaults to None) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text +encoder. LoRAXFormersAttnProcessor class diffusers.models.attention_processor.LoRAXFormersAttnProcessor < source > ( hidden_size: int cross_attention_dim: int rank: int = 4 attention_op: Optional = None network_alpha: Optional = None **kwargs ) Parameters hidden_size (int, optional) — +The hidden size of the attention layer. cross_attention_dim (int, optional) — +The number of channels in the encoder_hidden_states. rank (int, defaults to 4) — +The dimension of the LoRA update matrices. attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. network_alpha (int, optional) — +Equivalent to alpha but it’s usage is specific to Kohya (A1111) style LoRAs. kwargs (dict) — +Additional keyword arguments to pass to the LoRALinearLayer layers. Processor for implementing the LoRA attention mechanism with memory efficient attention using xFormers. SlicedAttnProcessor class diffusers.models.attention_processor.SlicedAttnProcessor < source > ( slice_size: int ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention. SlicedAttnAddedKVProcessor class diffusers.models.attention_processor.SlicedAttnAddedKVProcessor < source > ( slice_size ) Parameters slice_size (int, optional) — +The number of steps to compute attention. Uses as many slices as attention_head_dim // slice_size, and +attention_head_dim must be a multiple of the slice_size. Processor for implementing sliced attention with extra learnable key and value matrices for the text encoder. XFormersAttnProcessor class diffusers.models.attention_processor.XFormersAttnProcessor < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional, defaults to None) — +The base +operator to +use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best +operator. Processor for implementing memory efficient attention using xFormers. diff --git a/scrapped_outputs/f2f210e07da8697f7ad5073aab9e94b4.txt b/scrapped_outputs/f2f210e07da8697f7ad5073aab9e94b4.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff28dd01033ce547a340e7754e35c2123f361679 --- /dev/null +++ b/scrapped_outputs/f2f210e07da8697f7ad5073aab9e94b4.txt @@ -0,0 +1,14 @@ +Text-guided depth-to-image generation The StableDiffusionDepth2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. In addition, you can also pass a depth_map to preserve the image structure. 
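If a depth estimate for the input image is already available, it can be supplied directly through the depth_map argument documented further below. The snippet here is only a sketch: the my_depth.pt file is a hypothetical placeholder, and the exact tensor layout and value range the pipeline expects should be checked against its source.

import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

# Hypothetical precomputed depth estimate saved as a tensor (placeholder path);
# it is passed straight through as the depth conditioning
depth_map = torch.load("my_depth.pt").to("cuda", dtype=torch.float16)

image = pipeline(
    prompt="two tigers",
    image=init_image,
    depth_map=depth_map,
    strength=0.7,
).images[0]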
If no depth_map is provided, the pipeline automatically predicts the depth via an integrated depth-estimation model. Start by creating an instance of the StableDiffusionDepth2ImgPipeline: Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipeline = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, + use_safetensors=True, +).to("cuda") Now pass your prompt to the pipeline. You can also pass a negative_prompt to prevent certain words from guiding how an image is generated: Copied url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anatomy" +image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Input Output diff --git a/scrapped_outputs/f337342c2fdfe2794a8f30a9b3efcc10.txt b/scrapped_outputs/f337342c2fdfe2794a8f30a9b3efcc10.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f34a0d190c4c5b3e551118894f028a36.txt b/scrapped_outputs/f34a0d190c4c5b3e551118894f028a36.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ac980c70abc6eba4fbd0f38f30a6ecdd94ad92f --- /dev/null +++ b/scrapped_outputs/f34a0d190c4c5b3e551118894f028a36.txt @@ -0,0 +1,201 @@ +Depth-to-image The Stable Diffusion model can also infer depth based on an image using MiDaS. This allows you to pass a text prompt and an initial image to condition the generation of new images as well as a depth_map to preserve the image structure. Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionDepth2ImgPipeline class diffusers.StableDiffusionDepth2ImgPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers depth_estimator: DPTForDepthEstimation feature_extractor: DPTFeatureExtractor ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for text-guided depth-based image-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None image: Union = None depth_map: Optional = None strength: float = 0.8 num_inference_steps: Optional = 50 guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be used as the starting point. Can accept image +latents as image only if depth_map is not None. depth_map (torch.FloatTensor, optional) — +Depth prediction to be used as additional conditioning for the image generation process. If not +defined, it automatically predicts the depth with self.depth_estimator. strength (float, optional, defaults to 0.8) — +Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a +starting point and more noise is added the higher the strength. The number of denoising steps depends +on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising +process runs for the full number of iterations specified in num_inference_steps. A value of 1 +essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter is modulated by strength. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> import requests +>>> from PIL import Image + +>>> from diffusers import StableDiffusionDepth2ImgPipeline + +>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( +... "stabilityai/stable-diffusion-2-depth", +... torch_dtype=torch.float16, +... ) +>>> pipe.to("cuda") + + +>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" +>>> init_image = Image.open(requests.get(url, stream=True).raw) +>>> prompt = "two tigers" +>>> n_propmt = "bad, deformed, ugly, bad anotomy" +>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_propmt, strength=0.7).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... 
torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. 
If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. 
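As a rough sketch of these loaders on the depth-to-image pipeline (the local file paths and the <my-concept> token below are hypothetical placeholders, and any loaded weights must have been trained against the same base model as the pipeline's text encoder and UNet):

import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Placeholder files: a textual inversion embedding and a LoRA checkpoint
pipe.load_textual_inversion("./my_concept.bin", token="<my-concept>")
pipe.load_lora_weights("./my_lora_dir", weight_name="pytorch_lora_weights.safetensors")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
image = pipe(prompt="a photo of <my-concept> next to two cats", image=init_image).images[0]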
save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). 
nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/f3fd9258599a778a2349ac44a36cfe87.txt b/scrapped_outputs/f3fd9258599a778a2349ac44a36cfe87.txt new file mode 100644 index 0000000000000000000000000000000000000000..8c5bcb9f001a84d9b945c267456eb710daaafe80 --- /dev/null +++ b/scrapped_outputs/f3fd9258599a778a2349ac44a36cfe87.txt @@ -0,0 +1,104 @@ +DPMSolverSinglestepScheduler DPMSolverSinglestepScheduler is a single step scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. The original implementation can be found at LuChengTHU/dpm-solver. Tips It is recommended to set solver_order to 2 for guide sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. DPMSolverSinglestepScheduler class diffusers.DPMSolverSinglestepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = False use_karras_sigmas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. 
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver or dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. final_sigmas_type (str, optional, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. If set, the model’s output +contains the predicted Gaussian variance. DPMSolverSinglestepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. 
sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). get_order_list < source > ( num_inference_steps: int ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Computes the solver order at each time step. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). singlestep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-2]. singlestep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order singlestep DPMSolver that computes the solution at time prev_timestep from the +time timestep_list[-3]. singlestep_dpm_solver_update < source > ( model_output_list: List *args sample: FloatTensor = None order: int = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. timestep (int) — +The current and latter discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. order (int) — +The solver order at this step. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the singlestep DPMSolver. 
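In practice this scheduler is rarely stepped by hand; it is typically swapped into an existing pipeline and driven by that pipeline's denoising loop. A minimal sketch, assuming the runwayml/stable-diffusion-v1-5 checkpoint used elsewhere in these docs:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverSinglestepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# Reuse the existing scheduler config; solver_order=2 is the recommended setting
# for guided sampling (see the tips above)
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# DPM-Solver style schedulers usually produce good samples in around 20 steps
image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=20).images[0]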
step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the singlestep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/f40682a0ade742266f393420603e6a31.txt b/scrapped_outputs/f40682a0ade742266f393420603e6a31.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f41d43d1e5d291dd0f69e7648bfa8def.txt b/scrapped_outputs/f41d43d1e5d291dd0f69e7648bfa8def.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4cc4262c8901cbf0efaaf3a95066a4f6481fc18 --- /dev/null +++ b/scrapped_outputs/f41d43d1e5d291dd0f69e7648bfa8def.txt @@ -0,0 +1,78 @@ +unCLIP Hierarchical Text-Conditional Image Generation with CLIP Latents is by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. The unCLIP model in 🤗 Diffusers comes from kakaobrain’s karlo. The abstract from the paper is following: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. You can find lucidrains’ DALL-E 2 recreation at lucidrains/DALLE2-pytorch. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
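A minimal text-to-image sketch for this pipeline follows; the checkpoint id is assumed to be the karlo weights referenced above and is not confirmed by this page:

import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16  # assumed checkpoint id
).to("cuda")

image = pipe(
    "a high-resolution photograph of a red panda",
    prior_num_inference_steps=25,
    decoder_num_inference_steps=25,
    super_res_num_inference_steps=7,
).images[0]
image.save("red_panda.png")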
UnCLIPPipeline class diffusers.UnCLIPPipeline < source > ( prior: PriorTransformer decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel super_res_first: UNet2DModel super_res_last: UNet2DModel prior_scheduler: UnCLIPScheduler decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. prior (PriorTransformer) — +The canonical unCLIP prior to approximate the image embedding from the text embedding. text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. prior_scheduler (UnCLIPScheduler) — +Scheduler used in the prior denoising process (a modified DDPMScheduler). decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline for text-to-image generation using unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None num_images_per_prompt: int = 1 prior_num_inference_steps: int = 25 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Union = None prior_latents: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None text_model_output: Union = None text_attention_mask: Optional = None prior_guidance_scale: float = 4.0 decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image generation. This can only be left undefined if text_model_output +and text_attention_mask is passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the prior. More denoising steps usually lead to a higher quality +image at the expense of slower inference. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. prior_latents (torch.FloatTensor of shape (batch size, embeddings dimension), optional) — +Pre-generated noisy latents to be used as inputs for the prior. 
decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the super resolution UNets. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. decoder_guidance_scale (float, optional, defaults to 8.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. text_model_output (CLIPTextModelOutput, optional) — +Pre-defined CLIPTextModel outputs that can be derived from the text encoder. Pre-defined text +outputs can be passed for tasks like text embedding interpolations. Make sure to also pass +text_attention_mask in this case. prompt can then be left as None. text_attention_mask (torch.Tensor, optional) — +Pre-defined CLIP text attention mask that can be derived from the tokenizer. Pre-defined text attention +masks are necessary when passing text_model_output. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. UnCLIPImageVariationPipeline class diffusers.UnCLIPImageVariationPipeline < source > ( decoder: UNet2DConditionModel text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer text_proj: UnCLIPTextProjModel feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection super_res_first: UNet2DModel super_res_last: UNet2DModel decoder_scheduler: UnCLIPScheduler super_res_scheduler: UnCLIPScheduler ) Parameters text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). text_proj (UnCLIPTextProjModel) — +Utility class to prepare and combine the embeddings before they are passed to the decoder. decoder (UNet2DConditionModel) — +The decoder to invert the image embedding into an image. super_res_first (UNet2DModel) — +Super resolution UNet. Used in all but the last step of the super resolution diffusion process. super_res_last (UNet2DModel) — +Super resolution UNet. Used in the last step of the super resolution diffusion process. decoder_scheduler (UnCLIPScheduler) — +Scheduler used in the decoder denoising process (a modified DDPMScheduler). super_res_scheduler (UnCLIPScheduler) — +Scheduler used in the super resolution denoising process (a modified DDPMScheduler). Pipeline to generate image variations from an input image using UnCLIP. This model inherits from DiffusionPipeline.
Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( image: Union = None num_images_per_prompt: int = 1 decoder_num_inference_steps: int = 25 super_res_num_inference_steps: int = 7 generator: Optional = None decoder_latents: Optional = None super_res_latents: Optional = None image_embeddings: Optional = None decoder_guidance_scale: float = 8.0 output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image (PIL.Image.Image or List[PIL.Image.Image] or torch.FloatTensor) — +Image or tensor representing an image batch to be used as the starting point. If you provide a +tensor, it needs to be compatible with the CLIPImageProcessor +configuration. +Can be left as None only when image_embeddings are passed. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. decoder_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps for the decoder. More denoising steps usually lead to a higher quality +image at the expense of slower inference. super_res_num_inference_steps (int, optional, defaults to 7) — +The number of denoising steps for super resolution. More denoising steps usually lead to a higher +quality image at the expense of slower inference. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. decoder_latents (torch.FloatTensor of shape (batch size, channels, height, width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. super_res_latents (torch.FloatTensor of shape (batch size, channels, super res height, super res width), optional) — +Pre-generated noisy latents to be used as inputs for the decoder. decoder_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_embeddings (torch.Tensor, optional) — +Pre-defined image embeddings that can be derived from the image encoder. Pre-defined image embeddings +can be passed for tasks like image interpolations. image can be left as None. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images. + The call function to the pipeline for generation. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. 
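A minimal image-variation sketch follows; the checkpoint id is an assumption (an unCLIP image-variation checkpoint with the components listed above), and the COCO image URL is reused from the depth-to-image example earlier in this document:

import torch
from diffusers import UnCLIPImageVariationPipeline
from diffusers.utils import load_image

pipe = UnCLIPImageVariationPipeline.from_pretrained(
    "kakaobrain/karlo-v1-alpha-image-variations",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")
images = pipe(image=init_image, num_images_per_prompt=2, decoder_num_inference_steps=25).images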
diff --git a/scrapped_outputs/f44fef5bfaf9ebc0710c849e697305fd.txt b/scrapped_outputs/f44fef5bfaf9ebc0710c849e697305fd.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/f44fef5bfaf9ebc0710c849e697305fd.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s breakdown the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. 
Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you'll put your skills to the test and break down the more complex Stable Diffusion pipeline. The steps are more or less the same. You'll initialize the necessary components and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timesteps, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let's try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder converts the compressed representation back into an image. For text-to-image models, you'll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained CompVis/stable-diffusion-v1-4 checkpoint used below, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... 
) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. 
Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait to see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/f4759bb3d09e1620ba72a6ff086f672d.txt b/scrapped_outputs/f4759bb3d09e1620ba72a6ff086f672d.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a6f8d6ced6b91e1a0e4d7840137c4d469ea2882 --- /dev/null +++ b/scrapped_outputs/f4759bb3d09e1620ba72a6ff086f672d.txt @@ -0,0 +1,154 @@ +Scalable Diffusion Models with Transformers (DiT) + + +Overview + +Scalable Diffusion Models with Transformers (DiT) by William Peebles and Saining Xie. +The abstract of the paper is the following: +We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops — through increased transformer depth/width or increased number of input tokens — consistently have lower FID.
In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter. +The original codebase of this paper can be found here: facebookresearch/dit. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_dit.py +Conditional Image Generation +- + +Usage example + + + + Copied +from diffusers import DiTPipeline, DPMSolverMultistepScheduler +import torch + +pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16) +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +# pick words from Imagenet class labels +pipe.labels # to print all available words + +# pick words that exist in ImageNet +words = ["white shark", "umbrella"] + +class_ids = pipe.get_label_ids(words) + +generator = torch.manual_seed(33) +output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator) + +image = output.images[0] # label 'white shark' + +DiTPipeline + + +class diffusers.DiTPipeline + +< +source +> +( +transformer: Transformer2DModel +vae: AutoencoderKL +scheduler: KarrasDiffusionSchedulers +id2label: typing.Union[typing.Dict[int, str], NoneType] = None + +) + + +Parameters + +transformer (Transformer2DModel) — +Class conditioned Transformer in Diffusion model to denoise the encoded image latents. + + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +scheduler (DDIMScheduler) — +A scheduler to be used in combination with dit to denoise the encoded image latents. + + + +This pipeline inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +class_labels: typing.List[int] +guidance_scale: float = 4.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) + + +Parameters + +class_labels (List[int]) — +List of imagenet class labels for the images to be generated. + + +guidance_scale (float, optional, defaults to 4.0) — +Scale of the guidance signal. + + +generator (torch.Generator, optional) — +A torch generator to make generation +deterministic. + + +num_inference_steps (int, optional, defaults to 250) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + + +Function invoked when calling the pipeline for generation. + +get_label_ids + +< +source +> +( +label: typing.Union[str, typing.List[str]] + +) +→ +list of int + +Parameters + +label (str or dict of str) — label strings to be mapped to class ids. + + +Returns + +list of int + + + +Class ids to be processed by pipeline. + + +Map label strings, e.g. from ImageNet, to corresponding class ids. 
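As a short, hedged illustration of how get_label_ids() feeds into the __call__ arguments documented above (it restates the usage example; the seed and step count are arbitrary choices, not values from this page):
Copied
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# map ImageNet label strings to the integer ids expected by `class_labels`
class_ids = pipe.get_label_ids(["white shark", "umbrella"])

generator = torch.manual_seed(0)
output = pipe(class_labels=class_ids, guidance_scale=4.0, num_inference_steps=25, generator=generator)
output.images[0].save("dit_white_shark.png")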
diff --git a/scrapped_outputs/f48d4aebfbaca45623cae0771cd58803.txt b/scrapped_outputs/f48d4aebfbaca45623cae0771cd58803.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f071804a6d1fd96f89b53ac2e21853833e83f62 --- /dev/null +++ b/scrapped_outputs/f48d4aebfbaca45623cae0771cd58803.txt @@ -0,0 +1,74 @@ +DEISMultistepScheduler Diffusion Exponential Integrator Sampler (DEIS) is proposed in Fast Sampling of Diffusion Models with Exponential Integrator by Qinsheng Zhang and Yongxin Chen. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This implementation modifies the polynomial fitting formula in log-rho space instead of the original linear t space in the DEIS paper. The modification enjoys closed-form coefficients for exponential multistep update instead of replying on the numerical solver. The abstract from the paper is: The past few years have witnessed the great success of Diffusion models~(DMs) in generating high-fidelity samples in generative modeling tasks. A major limitation of the DM is its notoriously slow sampling procedure which normally requires hundreds to thousands of time discretization steps of the learned diffusion process to reach the desired accuracy. Our goal is to develop a fast sampling method for DMs with a much less number of steps while retaining high sample quality. To this end, we systematically analyze the sampling procedure in DMs and identify key factors that affect the sample quality, among which the method of discretization is most crucial. By carefully examining the learned diffusion process, we propose Diffusion Exponential Integrator Sampler~(DEIS). It is based on the Exponential Integrator designed for discretizing ordinary differential equations (ODEs) and leverages a semilinear structure of the learned diffusion process to reduce the discretization error. The proposed method can be applied to any DMs and can generate high-fidelity samples in as few as 10 steps. In our experiments, it takes about 3 minutes on one A6000 GPU to generate 50k images from CIFAR10. Moreover, by directly using pre-trained DMs, we achieve the state-of-art sampling performance when the number of score function evaluation~(NFE) is limited, e.g., 4.17 FID with 10 NFEs, 3.37 FID, and 9.74 IS with only 15 NFEs on CIFAR10. Code is available at this https URL. Tips It is recommended to set solver_order to 2 or 3, while solver_order=1 is equivalent to DDIMScheduler. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set thresholding=True to use the dynamic thresholding. DEISMultistepScheduler class diffusers.DEISMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Optional = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'deis' solver_type: str = 'logrho' lower_order_final: bool = True use_karras_sigmas: Optional = False timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. 
beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DEIS order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. algorithm_type (str, defaults to deis) — +The algorithm type for the solver. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DEISMultistepScheduler is a fast high order solver for diffusion ordinary differential equations (ODEs). This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DEIS algorithm needs. deis_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. prev_timestep (int) — +The previous discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DEIS (equivalent to DDIM). 
multistep_deis_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DEIS. multistep_deis_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DEIS. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DEIS. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
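The reference entries above describe the solver but do not show it in use. As a minimal, hedged sketch (the checkpoint name and step count are illustrative assumptions, not taken from this page), DEISMultistepScheduler can be swapped into an existing pipeline through from_config(), following the tip of using solver_order=2 for guided sampling:
Copied
import torch
from diffusers import DiffusionPipeline, DEISMultistepScheduler

# load any diffusion pipeline; the checkpoint here is only an example
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# reuse the existing scheduler configuration and override solver_order as suggested in the Tips
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config, solver_order=2)

# DEIS is a fast multistep solver, so a small number of inference steps is usually enough
image = pipe("an astronaut riding a horse on mars", num_inference_steps=20).images[0]
image.save("deis_sample.png")
Because set_timesteps() and step() follow the common scheduler interface, the same swap works for any pipeline whose scheduler configuration is compatible.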
diff --git a/scrapped_outputs/f4a02d8af91e55becf079d391642d26d.txt b/scrapped_outputs/f4a02d8af91e55becf079d391642d26d.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f4c2783d27a3248855f1c4e24ddffa13.txt b/scrapped_outputs/f4c2783d27a3248855f1c4e24ddffa13.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a2dab2440032fce02434afcfbdf3d52bba38d63 --- /dev/null +++ b/scrapped_outputs/f4c2783d27a3248855f1c4e24ddffa13.txt @@ -0,0 +1,11 @@ +Philosophy 🧨 Diffusers provides state-of-the-art pretrained diffusion models across multiple modalities. +Its purpose is to serve as a modular toolbox for both inference and training. We aim at building a library that stands the test of time and therefore take API design very seriously. In a nutshell, Diffusers is built to be a natural extension of PyTorch. Therefore, most of our design choices are based on PyTorch’s Design Principles. Let’s go over the most important ones: Usability over Performance While Diffusers has many built-in performance-enhancing features (see Memory and Speed), models are always loaded with the highest precision and lowest optimization. Therefore, by default diffusion pipelines are always instantiated on CPU with float32 precision if not otherwise defined by the user. This ensures usability across different platforms and accelerators and means that no complex installations are required to run the library. Diffusers aims to be a light-weight package and therefore has very few required dependencies, but many soft dependencies that can improve performance (such as accelerate, safetensors, onnx, etc…). We strive to keep the library as lightweight as possible so that it can be added without much concern as a dependency on other packages. Diffusers prefers simple, self-explainable code over condensed, magic code. This means that short-hand code syntaxes such as lambda functions, and advanced PyTorch operators are often not desired. Simple over easy As PyTorch states, explicit is better than implicit and simple is better than complex. This design philosophy is reflected in multiple parts of the library: We follow PyTorch’s API with methods like DiffusionPipeline.to to let the user handle device management. Raising concise error messages is preferred to silently correct erroneous input. Diffusers aims at teaching the user, rather than making the library as easy to use as possible. Complex model vs. scheduler logic is exposed instead of magically handled inside. Schedulers/Samplers are separated from diffusion models with minimal dependencies on each other. This forces the user to write the unrolled denoising loop. However, the separation allows for easier debugging and gives the user more control over adapting the denoising process or switching out diffusion models or schedulers. Separately trained components of the diffusion pipeline, e.g. the text encoder, the unet, and the variational autoencoder, each have their own model class. This forces the user to handle the interaction between the different model components, and the serialization format separates the model components into different files. However, this allows for easier debugging and customization. DreamBooth or Textual Inversion training +is very simple thanks to Diffusers’ ability to separate single components of the diffusion pipeline. 
Tweakable, contributor-friendly over abstraction For large parts of the library, Diffusers adopts an important design principle of the Transformers library, which is to prefer copy-pasted code over hasty abstractions. This design principle is very opinionated and stands in stark contrast to popular design principles such as Don’t repeat yourself (DRY). +In short, just like Transformers does for modeling files, Diffusers prefers to keep an extremely low level of abstraction and very self-contained code for pipelines and schedulers. +Functions, long code blocks, and even classes can be copied across multiple files which at first can look like a bad, sloppy design choice that makes the library unmaintainable. +However, this design has proven to be extremely successful for Transformers and makes a lot of sense for community-driven, open-source machine learning libraries because: Machine Learning is an extremely fast-moving field in which paradigms, model architectures, and algorithms are changing rapidly, which therefore makes it very difficult to define long-lasting code abstractions. Machine Learning practitioners like to be able to quickly tweak existing code for ideation and research and therefore prefer self-contained code over one that contains many abstractions. Open-source libraries rely on community contributions and therefore must build a library that is easy to contribute to. The more abstract the code, the more dependencies, the harder to read, and the harder to contribute to. Contributors simply stop contributing to very abstract libraries out of fear of breaking vital functionality. If contributing to a library cannot break other fundamental code, not only is it more inviting for potential new contributors, but it is also easier to review and contribute to multiple parts in parallel. At Hugging Face, we call this design the single-file policy which means that almost all of the code of a certain class should be written in a single, self-contained file. To read more about the philosophy, you can have a look +at this blog post. In Diffusers, we follow this philosophy for both pipelines and schedulers, but only partly for diffusion models. The reason we don’t follow this design fully for diffusion models is because almost all diffusion pipelines, such +as DDPM, Stable Diffusion, unCLIP (DALL·E 2) and Imagen all rely on the same diffusion model, the UNet. Great, now you should have generally understood why 🧨 Diffusers is designed the way it is 🤗. +We try to apply these design principles consistently across the library. Nevertheless, there are some minor exceptions to the philosophy or some unlucky design choices. If you have feedback regarding the design, we would ❤️ to hear it directly on GitHub. Design Philosophy in Details Now, let’s look a bit into the nitty-gritty details of the design philosophy. Diffusers essentially consists of three major classes: pipelines, models, and schedulers. +Let’s walk through more in-detail design decisions for each class. Pipelines Pipelines are designed to be easy to use (therefore do not follow Simple over easy 100%), are not feature complete, and should loosely be seen as examples of how to use models and schedulers for inference. The following design principles are followed: Pipelines follow the single-file policy. All pipelines can be found in individual directories under src/diffusers/pipelines. One pipeline folder corresponds to one diffusion paper/project/release. 
Multiple pipeline files can be gathered in one pipeline folder, as it’s done for src/diffusers/pipelines/stable-diffusion. If pipelines share similar functionality, one can make use of the #Copied from mechanism. Pipelines all inherit from DiffusionPipeline. Every pipeline consists of different model and scheduler components, that are documented in the model_index.json file, are accessible under the same name as attributes of the pipeline and can be shared between pipelines with DiffusionPipeline.components function. Every pipeline should be loadable via the DiffusionPipeline.from_pretrained function. Pipelines should be used only for inference. Pipelines should be very readable, self-explanatory, and easy to tweak. Pipelines should be designed to build on top of each other and be easy to integrate into higher-level APIs. Pipelines are not intended to be feature-complete user interfaces. For future complete user interfaces one should rather have a look at InvokeAI, Diffuzers, and lama-cleaner. Every pipeline should have one and only one way to run it via a __call__ method. The naming of the __call__ arguments should be shared across all pipelines. Pipelines should be named after the task they are intended to solve. In almost all cases, novel diffusion pipelines shall be implemented in a new pipeline folder/file. Models Models are designed as configurable toolboxes that are natural extensions of PyTorch’s Module class. They only partly follow the single-file policy. The following design principles are followed: Models correspond to a type of model architecture. E.g. the UNet2DConditionModel class is used for all UNet variations that expect 2D image inputs and are conditioned on some context. All models can be found in src/diffusers/models and every model architecture shall be defined in its file, e.g. unet_2d_condition.py, transformer_2d.py, etc… Models do not follow the single-file policy and should make use of smaller model building blocks, such as attention.py, resnet.py, embeddings.py, etc… Note: This is in stark contrast to Transformers’ modeling files and shows that models do not really follow the single-file policy. Models intend to expose complexity, just like PyTorch’s Module class, and give clear error messages. Models all inherit from ModelMixin and ConfigMixin. Models can be optimized for performance when it doesn’t demand major code changes, keeps backward compatibility, and gives significant memory or compute gain. Models should by default have the highest precision and lowest performance setting. To integrate new model checkpoints whose general architecture can be classified as an architecture that already exists in Diffusers, the existing model architecture shall be adapted to make it work with the new checkpoint. One should only create a new file if the model architecture is fundamentally different. Models should be designed to be easily extendable to future changes. This can be achieved by limiting public function arguments, configuration arguments, and “foreseeing” future changes, e.g. it is usually better to add string “…type” arguments that can easily be extended to new future types instead of boolean is_..._type arguments. Only the minimum amount of changes shall be made to existing architectures to make a new model checkpoint work. The model design is a difficult trade-off between keeping code readable and concise and supporting many model checkpoints. 
For most parts of the modeling code, classes shall be adapted for new model checkpoints, while there are some exceptions where it is preferred to add new classes to make sure the code is kept concise and +readable long-term, such as UNet blocks and Attention processors. Schedulers Schedulers are responsible to guide the denoising process for inference as well as to define a noise schedule for training. They are designed as individual classes with loadable configuration files and strongly follow the single-file policy. The following design principles are followed: All schedulers are found in src/diffusers/schedulers. Schedulers are not allowed to import from large utils files and shall be kept very self-contained. One scheduler Python file corresponds to one scheduler algorithm (as might be defined in a paper). If schedulers share similar functionalities, we can make use of the #Copied from mechanism. Schedulers all inherit from SchedulerMixin and ConfigMixin. Schedulers can be easily swapped out with the ConfigMixin.from_config method as explained in detail here. Every scheduler has to have a set_num_inference_steps, and a step function. set_num_inference_steps(...) has to be called before every denoising process, i.e. before step(...) is called. Every scheduler exposes the timesteps to be “looped over” via a timesteps attribute, which is an array of timesteps the model will be called upon. The step(...) function takes a predicted model output and the “current” sample (x_t) and returns the “previous”, slightly more denoised sample (x_t-1). Given the complexity of diffusion schedulers, the step function does not expose all the complexity and can be a bit of a “black box”. In almost all cases, novel schedulers shall be implemented in a new scheduling file. diff --git a/scrapped_outputs/f4e0d2b8f62a707040c7df6a58a7ce77.txt b/scrapped_outputs/f4e0d2b8f62a707040c7df6a58a7ce77.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f4e5d9da14c35d21037906f0fd802eae.txt b/scrapped_outputs/f4e5d9da14c35d21037906f0fd802eae.txt new file mode 100644 index 0000000000000000000000000000000000000000..670e60a336d617da607490febe4cdc7f57188444 --- /dev/null +++ b/scrapped_outputs/f4e5d9da14c35d21037906f0fd802eae.txt @@ -0,0 +1,82 @@ +T2I-Adapter T2I-Adapter is a lightweight adapter model that provides an additional conditioning input image (line art, canny, sketch, depth, pose) to better control image generation. It is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because its only inserts weights into the UNet instead of copying and training it. The T2I-Adapter is only available for training with the Stable Diffusion XL (SDXL) model. This guide will explore the train_t2i_adapter_sdxl.py training script to help you become familiar with it, and how you can adapt it for your own use-case. Before running the script, make sure you install the library from source: Copied git clone https://github.com/huggingface/diffusers +cd diffusers +pip install . Then navigate to the example folder containing the training script and install the required dependencies for the script you’re using: Copied cd examples/t2i_adapter +pip install -r requirements.txt 🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It’ll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. 
Initialize an 🤗 Accelerate environment: Copied accelerate config To set up a default 🤗 Accelerate environment without choosing any configurations: Copied accelerate config default Or if your environment doesn’t support an interactive shell, like a notebook, you can use: Copied from accelerate.utils import write_basic_config + +write_basic_config() Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script. The following sections highlight parts of the training script that are important for understanding how to modify it, but they don’t cover every aspect of the script in detail. If you’re interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. Script parameters The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the parse_args() function. It provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you’d like. For example, to activate gradient accumulation, add the --gradient_accumulation_steps parameter to the training command: Copied accelerate launch train_t2i_adapter_sdxl.py \ + --gradient_accumulation_steps=4 Many of the basic and important parameters are described in the Text-to-image training guide, so this guide just focuses on the relevant T2I-Adapter parameters: --pretrained_vae_model_name_or_path: path to a pretrained VAE; the SDXL VAE is known to suffer from numerical instability, so this parameter allows you to specify a better VAE --crops_coords_top_left_h and --crops_coords_top_left_w: height and width coordinates to include in SDXL’s crop coordinate embeddings --conditioning_image_column: the column of the conditioning images in the dataset --proportion_empty_prompts: the proportion of image prompts to replace with empty strings Training script As with the script parameters, a walkthrough of the training script is provided in the Text-to-image training guide. Instead, this guide takes a look at the T2I-Adapter relevant parts of the script. The training script begins by preparing the dataset. This includes tokenizing the prompt and applying transforms to the images and conditioning images.
Copied conditioning_image_transforms = transforms.Compose( + [ + transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), + transforms.CenterCrop(args.resolution), + transforms.ToTensor(), + ] +) Within the main() function, the T2I-Adapter is either loaded from a pretrained adapter or it is randomly initialized: Copied if args.adapter_model_name_or_path: + logger.info("Loading existing adapter weights.") + t2iadapter = T2IAdapter.from_pretrained(args.adapter_model_name_or_path) +else: + logger.info("Initializing t2iadapter weights.") + t2iadapter = T2IAdapter( + in_channels=3, + channels=(320, 640, 1280, 1280), + num_res_blocks=2, + downscale_factor=16, + adapter_type="full_adapter_xl", + ) The optimizer is initialized for the T2I-Adapter parameters: Copied params_to_optimize = t2iadapter.parameters() +optimizer = optimizer_class( + params_to_optimize, + lr=args.learning_rate, + betas=(args.adam_beta1, args.adam_beta2), + weight_decay=args.adam_weight_decay, + eps=args.adam_epsilon, +) Lastly, in the training loop, the adapter conditioning image and the text embeddings are passed to the UNet to predict the noise residual: Copied t2iadapter_image = batch["conditioning_pixel_values"].to(dtype=weight_dtype) +down_block_additional_residuals = t2iadapter(t2iadapter_image) +down_block_additional_residuals = [ + sample.to(dtype=weight_dtype) for sample in down_block_additional_residuals +] + +model_pred = unet( + inp_noisy_latents, + timesteps, + encoder_hidden_states=batch["prompt_ids"], + added_cond_kwargs=batch["unet_added_conditions"], + down_block_additional_residuals=down_block_additional_residuals, +).sample If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial which breaks down the basic pattern of the denoising process. Launch the script Now you’re ready to launch the training script! 🚀 For this example training, you’ll use the fusing/fill50k dataset. You can also create and use your own dataset if you want (see the Create a dataset for training guide). Set the environment variable MODEL_DIR to a model id on the Hub or a path to a local model and OUTPUT_DIR to where you want to save the model. Download the following images to condition your training with: Copied wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_1.png +wget https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/controlnet_training/conditioning_image_2.png To monitor training progress with Weights & Biases, add the --report_to=wandb parameter to the training command. You’ll also need to add the --validation_image, --validation_prompt, and --validation_steps to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. 
Copied export MODEL_DIR="stabilityai/stable-diffusion-xl-base-1.0" +export OUTPUT_DIR="path to save model" + +accelerate launch train_t2i_adapter_sdxl.py \ + --pretrained_model_name_or_path=$MODEL_DIR \ + --output_dir=$OUTPUT_DIR \ + --dataset_name=fusing/fill50k \ + --mixed_precision="fp16" \ + --resolution=1024 \ + --learning_rate=1e-5 \ + --max_train_steps=15000 \ + --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \ + --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \ + --validation_steps=100 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=4 \ + --report_to="wandb" \ + --seed=42 \ + --push_to_hub Once training is complete, you can use your T2I-Adapter for inference: Copied from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler +from diffusers.utils import load_image +import torch + +adapter = T2IAdapter.from_pretrained("path/to/adapter", torch_dtype=torch.float16) +pipeline = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16 +) + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_xformers_memory_efficient_attention() +pipeline.enable_model_cpu_offload() + +control_image = load_image("./conditioning_image_1.png") +prompt = "pale golden rod circle with old lace background" + +generator = torch.manual_seed(0) +image = pipeline( + prompt, image=control_image, generator=generator +).images[0] +image.save("./output.png") Next steps Congratulations on training a T2I-Adapter model! 🎉 To learn more: Read the Efficient Controllable Generation for SDXL with T2I-Adapters blog post to learn more details about the experimental results from the T2I-Adapter team. diff --git a/scrapped_outputs/f4fb4724b3c1636fc6d0527a37208489.txt b/scrapped_outputs/f4fb4724b3c1636fc6d0527a37208489.txt new file mode 100644 index 0000000000000000000000000000000000000000..a782332fc7cd440b86e7889f43564b9e3d2ea725 --- /dev/null +++ b/scrapped_outputs/f4fb4724b3c1636fc6d0527a37208489.txt @@ -0,0 +1,87 @@ +Understanding pipelines, models and schedulers 🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the DiffusionPipeline bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. In this tutorial, you’ll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. Deconstruct a basic pipeline A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: Copied >>> from diffusers import DDPMPipeline + +>>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") +>>> image = ddpm(num_inference_steps=25).images[0] +>>> image That was super easy, but how did the pipeline do that? Let’s break down the pipeline and take a look at what’s happening under the hood. In the example above, the pipeline contains a UNet2DModel model and a DDPMScheduler. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times.
At each timestep, the model predicts the noise residual and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. To recreate the pipeline with the model and scheduler separately, let’s write our own denoising process. Load the model and scheduler: Copied >>> from diffusers import DDPMScheduler, UNet2DModel + +>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") +>>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256", use_safetensors=True).to("cuda") Set the number of timesteps to run the denoising process for: Copied >>> scheduler.set_timesteps(50) Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you’ll iterate over this tensor to denoise an image: Copied >>> scheduler.timesteps +tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, + 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, + 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, + 140, 120, 100, 80, 60, 40, 20, 0]) Create some random noise with the same shape as the desired output: Copied >>> import torch + +>>> sample_size = model.config.sample_size +>>> noise = torch.randn((1, 3, sample_size, sample_size), device="cuda") Now write a loop to iterate over the timesteps. At each timestep, the model does a UNet2DModel.forward() pass and returns the noisy residual. The scheduler’s step() method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it’ll repeat until it reaches the end of the timesteps array. Copied >>> input = noise + +>>> for t in scheduler.timesteps: +... with torch.no_grad(): +... noisy_residual = model(input, t).sample +... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample +... input = previous_noisy_sample This is the entire denoising process, and you can use this same pattern to write any diffusion system. The last step is to convert the denoised output into an image: Copied >>> from PIL import Image +>>> import numpy as np + +>>> image = (input / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).round().to(torch.uint8).cpu().numpy() +>>> image = Image.fromarray(image) +>>> image In the next section, you’ll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You’ll initialize the necessary components, and set the number of timesteps to create a timestep array. The timestep array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the timestep’s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. This process is repeated until you reach the end of the timestep array. Let’s try it out! Deconstruct the Stable Diffusion pipeline Stable Diffusion is a text-to-image latent diffusion model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. 
The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you’ll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. 💡 Read the How does Stable Diffusion work? blog for more details about how the VAE, UNet, and text encoder models work. Now that you know what you need for the Stable Diffusion pipeline, load all these components with the from_pretrained() method. You can find them in the pretrained runwayml/stable-diffusion-v1-5 checkpoint, and each component is stored in a separate subfolder: Copied >>> from PIL import Image +>>> import torch +>>> from transformers import CLIPTextModel, CLIPTokenizer +>>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler + +>>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", use_safetensors=True) +>>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") +>>> text_encoder = CLIPTextModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="text_encoder", use_safetensors=True +... ) +>>> unet = UNet2DConditionModel.from_pretrained( +... "CompVis/stable-diffusion-v1-4", subfolder="unet", use_safetensors=True +... ) Instead of the default PNDMScheduler, exchange it for the UniPCMultistepScheduler to see how easy it is to plug a different scheduler in: Copied >>> from diffusers import UniPCMultistepScheduler + +>>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: Copied >>> torch_device = "cuda" +>>> vae.to(torch_device) +>>> text_encoder.to(torch_device) +>>> unet.to(torch_device) Create text embeddings The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. 💡 The guidance_scale parameter determines how much weight should be given to the prompt when generating an image. Feel free to choose any prompt you like if you want to generate something else! Copied >>> prompt = ["a photograph of an astronaut riding a horse"] +>>> height = 512 # default height of Stable Diffusion +>>> width = 512 # default width of Stable Diffusion +>>> num_inference_steps = 25 # Number of denoising steps +>>> guidance_scale = 7.5 # Scale for classifier-free guidance +>>> generator = torch.manual_seed(0) # Seed generator to create the initial latent noise +>>> batch_size = len(prompt) Tokenize the text and generate the embeddings from the prompt: Copied >>> text_input = tokenizer( +... prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" +... ) + +>>> with torch.no_grad(): +... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] You’ll also need to generate the unconditional text embeddings which are the embeddings for the padding token. 
These need to have the same shape (batch_size and seq_length) as the conditional text_embeddings: Copied >>> max_length = text_input.input_ids.shape[-1] +>>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") +>>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] Let’s concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: Copied >>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) Create random noise Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it’ll be gradually denoised. At this point, the latent image is smaller than the final image size but that’s okay though because the model will transform it into the final 512x512 image dimensions later. 💡 The height and width are divided by 8 because the vae model has 3 down-sampling layers. You can check by running the following: Copied 2 ** (len(vae.config.block_out_channels) - 1) == 8 Copied >>> latents = torch.randn( +... (batch_size, unet.config.in_channels, height // 8, width // 8), +... generator=generator, +... device=torch_device, +... ) Denoise the image Start by scaling the input with the initial noise distribution, sigma, the noise scale value, which is required for improved schedulers like UniPCMultistepScheduler: Copied >>> latents = latents * scheduler.init_noise_sigma The last step is to create the denoising loop that’ll progressively transform the pure noise in latents to an image described by your prompt. Remember, the denoising loop needs to do three things: Set the scheduler’s timesteps to use during denoising. Iterate over the timesteps. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. Copied >>> from tqdm.auto import tqdm + +>>> scheduler.set_timesteps(num_inference_steps) + +>>> for t in tqdm(scheduler.timesteps): +... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. +... latent_model_input = torch.cat([latents] * 2) + +... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) + +... # predict the noise residual +... with torch.no_grad(): +... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample + +... # perform guidance +... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) +... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) + +... # compute the previous noisy sample x_t -> x_t-1 +... latents = scheduler.step(noise_pred, t, latents).prev_sample Decode the image The final step is to use the vae to decode the latent representation into an image and get the decoded output with sample: Copied # scale and decode the image latents with vae +latents = 1 / 0.18215 * latents +with torch.no_grad(): + image = vae.decode(latents).sample Lastly, convert the image to a PIL.Image to see your generated image! Copied >>> image = (image / 2 + 0.5).clamp(0, 1).squeeze() +>>> image = (image.permute(1, 2, 0) * 255).to(torch.uint8).cpu().numpy() +>>> images = (image * 255).round().astype("uint8") +>>> image = Image.fromarray(image) +>>> image Next steps From basic to complex pipelines, you’ve seen that all you really need to write your own diffusion system is a denoising loop. 
The loop should set the scheduler’s timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. For your next steps, feel free to: Learn how to build and contribute a pipeline to 🧨 Diffusers. We can’t wait and see what you’ll come up with! Explore existing pipelines in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/scrapped_outputs/f5019f5f4ebbdd28116ba5f76fcbb3de.txt b/scrapped_outputs/f5019f5f4ebbdd28116ba5f76fcbb3de.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f50f3f683892a40d04c6812c27f372bd.txt b/scrapped_outputs/f50f3f683892a40d04c6812c27f372bd.txt new file mode 100644 index 0000000000000000000000000000000000000000..5746a2c02dac5496a887c2e8bf83d617fa20db99 --- /dev/null +++ b/scrapped_outputs/f50f3f683892a40d04c6812c27f372bd.txt @@ -0,0 +1,323 @@ +Pipelines + +Pipelines provide a simple way to run state-of-the-art diffusion models in inference. +Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler +components - all of which are needed to have a functioning end-to-end diffusion system. +As an example, Stable Diffusion has three independently trained models: +Autoencoder +Conditional Unet +CLIP text encoder +a scheduler component, scheduler, +a CLIPImageProcessor, +as well as a safety checker. +All of these components are necessary to run stable diffusion in inference even though they were trained +or created independently from each other. +To that end, we strive to offer all open-sourced, state-of-the-art diffusion system under a unified API. +More specifically, we strive to provide pipelines that +can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (e.g. LDMTextToImagePipeline, uses the officially released weights of High-Resolution Image Synthesis with Latent Diffusion Models), +have a simple user interface to run the model in inference (see the Pipelines API section), +are easy to understand with code that is self-explanatory and can be read along-side the official paper (see Pipelines summary), +can easily be contributed by the community (see the Contribution section). +Note that pipelines do not (and should not) offer any training functionality. +If you are looking for official training examples, please have a look at examples. + +🧨 Diffusers Summary + +The following table summarizes all officially supported pipelines, their corresponding paper, and if +available a colab notebook to directly try them out. 
+Pipeline +Paper +Tasks +Colab +alt_diffusion +AltDiffusion +Image-to-Image Text-Guided Generation +- +audio_diffusion +Audio Diffusion +Unconditional Audio Generation + +controlnet +ControlNet with Stable Diffusion +Image-to-Image Text-Guided Generation + +cycle_diffusion +Cycle Diffusion +Image-to-Image Text-Guided Generation + +dance_diffusion +Dance Diffusion +Unconditional Audio Generation + +ddpm +Denoising Diffusion Probabilistic Models +Unconditional Image Generation + +ddim +Denoising Diffusion Implicit Models +Unconditional Image Generation + +if +IF +Image Generation + +if_img2img +IF +Image-to-Image Generation + +if_inpainting +IF +Image-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Text-to-Image Generation + +latent_diffusion +High-Resolution Image Synthesis with Latent Diffusion Models +Super Resolution Image-to-Image + +latent_diffusion_uncond +High-Resolution Image Synthesis with Latent Diffusion Models +Unconditional Image Generation + +paint_by_example +Paint by Example: Exemplar-based Image Editing with Diffusion Models +Image-Guided Image Inpainting + +pndm +Pseudo Numerical Methods for Diffusion Models on Manifolds +Unconditional Image Generation + +score_sde_ve +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +score_sde_vp +Score-Based Generative Modeling through Stochastic Differential Equations +Unconditional Image Generation + +semantic_stable_diffusion +SEGA: Instructing Diffusion using Semantic Dimensions +Text-to-Image Generation + +stable_diffusion_text2img +Stable Diffusion +Text-to-Image Generation + +stable_diffusion_img2img +Stable Diffusion +Image-to-Image Text-Guided Generation + +stable_diffusion_inpaint +Stable Diffusion +Text-Guided Image Inpainting + +stable_diffusion_panorama +MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation +Text-Guided Panorama View Generation + +stable_diffusion_pix2pix +InstructPix2Pix: Learning to Follow Image Editing Instructions +Text-Based Image Editing + +stable_diffusion_pix2pix_zero +Zero-shot Image-to-Image Translation +Text-Based Image Editing + +stable_diffusion_attend_and_excite +Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models +Text-to-Image Generation + +stable_diffusion_self_attention_guidance +Self-Attention Guidance +Text-to-Image Generation + +stable_diffusion_image_variation +Stable Diffusion Image Variations +Image-to-Image Generation + +stable_diffusion_latent_upscale +Stable Diffusion Latent Upscaler +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_2 +Stable Diffusion 2 +Text-to-Image Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Image Inpainting + +stable_diffusion_2 +Stable Diffusion 2 +Depth-to-Image Text-Guided Generation + +stable_diffusion_2 +Stable Diffusion 2 +Text-Guided Super Resolution Image-to-Image + +stable_diffusion_safe +Safe Stable Diffusion +Text-Guided Generation + +stable_unclip +Stable unCLIP +Text-to-Image Generation + +stable_unclip +Stable unCLIP +Image-to-Image Text-Guided Generation + +stochastic_karras_ve +Elucidating the Design Space of Diffusion-Based Generative Models +Unconditional Image Generation + +text_to_video_sd +Modelscope’s Text-to-video-synthesis Model in Open Domain +Text-to-Video Generation + +unclip +Hierarchical Text-Conditional Image Generation with CLIP Latents +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and 
Variations All in One Diffusion Model +Text-to-Image Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Image Variations Generation + +versatile_diffusion +Versatile Diffusion: Text, Images and Variations All in One Diffusion Model +Dual Image and Text Guided Generation + +vq_diffusion +Vector Quantized Diffusion Model for Text-to-Image Synthesis +Text-to-Image Generation + +text_to_video_zero +Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators +Text-to-Video Generation + +Note: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers. +However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the Examples below. + +Pipelines API + +Diffusion models often consist of multiple independently-trained models or other previously existing components. +Each model has been trained independently on a different task and the scheduler can easily be swapped out and replaced with a different one. +During inference, we however want to be able to easily load all components and use them in inference - even if one component, e.g. CLIP’s text encoder, originates from a different library, such as Transformers. To that end, all pipelines provide the following functionality: +from_pretrained method that accepts a Hugging Face Hub repository id, e.g. runwayml/stable-diffusion-v1-5 or a path to a local directory, e.g. +”./stable-diffusion”. To correctly retrieve which models and components should be loaded, one has to provide a model_index.json file, e.g. runwayml/stable-diffusion-v1-5/model_index.json, which defines all components that should be +loaded into the pipelines. More specifically, for each model/component one needs to define the format : ["", ""]. is the attribute name given to the loaded instance of which can be found in the library or pipeline folder called "". +save_pretrained that accepts a local path, e.g. ./stable-diffusion under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, e.g. ./stable_diffusion/unet. +In addition, a model_index.json file is created at the root of the local path, e.g. ./stable_diffusion/model_index.json so that the complete pipeline can again be instantiated +from the local path. +to which accepts a string or torch.device to move all models that are of type torch.nn.Module to the passed device. The behavior is fully analogous to PyTorch’s to method. +__call__ method to use the pipeline in inference. __call__ defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the __call__ method can strongly vary from pipeline to pipeline. E.g. a text-to-image pipeline, such as StableDiffusionPipeline should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as DDPMPipeline on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for +each pipeline, one should look directly into the respective pipeline. +Note: All pipelines have PyTorch’s autograd disabled by decorating the __call__ method with a torch.no_grad decorator because pipelines should +not be used for training. 
If you want to store the gradients during the forward pass, we recommend writing your own pipeline, see also our community-examples. + +Contribution + +We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire +all of our pipelines to be self-contained, easy-to-tweak, beginner-friendly and for one-purpose-only. +Self-contained: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should be either directly defined in the pipeline file itself, should be inherited from (and only from) the DiffusionPipeline class or be directly attached to the model and scheduler components of the pipeline. +Easy-to-use: Pipelines should be extremely easy to use - one should be able to load the pipeline and +use it for its designated task, e.g. text-to-image generation, in just a couple of lines of code. Most +logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the __call__ method. +Easy-to-tweak: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our community-examples. If you feel that an important pipeline should be part of the official pipelines but isn’t, a contribution to the official pipelines would be even better. +One-purpose-only: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, e.g. image2image translation and in-painting, pipelines shall be used for one task only to keep them easy-to-tweak and readable. + +Examples + + +Text-to-Image generation with Stable Diffusion + + + + Copied +# make sure you're logged in with `huggingface-cli login` +from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler + +pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") +pipe = pipe.to("cuda") + +prompt = "a photo of an astronaut riding a horse on mars" +image = pipe(prompt).images[0] + +image.save("astronaut_rides_horse.png") + +Image-to-Image text-guided generation with Stable Diffusion + +The StableDiffusionImg2ImgPipeline lets you pass a text prompt and an initial image to condition the generation of new images. 
+ + + Copied +import requests +import torch +from PIL import Image +from io import BytesIO + +from diffusers import StableDiffusionImg2ImgPipeline + +# load the pipeline +device = "cuda" +pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to( + device +) + +# let's download an initial image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +response = requests.get(url) +init_image = Image.open(BytesIO(response.content)).convert("RGB") +init_image = init_image.resize((768, 512)) + +prompt = "A fantasy landscape, trending on artstation" + +images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images + +images[0].save("fantasy_landscape.png") +You can also run this example on colab + +Tweak prompts reusing seeds and latents + +You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. This notebook shows how to do it step by step. You can also run it in Google Colab + +In-painting using Stable Diffusion + +The StableDiffusionInpaintPipeline lets you edit specific parts of an image by providing a mask and text prompt. + + + Copied +import PIL +import requests +import torch +from io import BytesIO + +from diffusers import StableDiffusionInpaintPipeline + + +def download_image(url): + response = requests.get(url) + return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = download_image(img_url).resize((512, 512)) +mask_image = download_image(mask_url).resize((512, 512)) + +pipe = StableDiffusionInpaintPipeline.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, +) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] +You can also run this example on colab diff --git a/scrapped_outputs/f542209302cc9e52712730cdd4a21a35.txt b/scrapped_outputs/f542209302cc9e52712730cdd4a21a35.txt new file mode 100644 index 0000000000000000000000000000000000000000..2eda1601aa5e193fb122502a0750cc9c60eccfd5 --- /dev/null +++ b/scrapped_outputs/f542209302cc9e52712730cdd4a21a35.txt @@ -0,0 +1,196 @@ +Stable Cascade This model is built upon the Würstchen architecture and its main +difference from other models like Stable Diffusion is that it works in a much smaller latent space. Why is this +important? The smaller the latent space, the faster you can run inference and the cheaper the training becomes. +How small is the latent space? Stable Diffusion uses a compression factor of 8, resulting in a 1024x1024 image being +encoded to 128x128. Stable Cascade achieves a compression factor of 42, meaning that it is possible to encode a +1024x1024 image to 24x24, while maintaining crisp reconstructions. The text-conditional model is then trained in the +highly compressed latent space. Previous versions of this architecture achieved a 16x cost reduction over Stable +Diffusion 1.5. Therefore, this kind of model is well suited for use cases where efficiency is important. Furthermore, all known extensions +like finetuning, LoRA, ControlNet, IP-Adapter, LCM etc.
are possible with this method as well. The original codebase can be found at Stability-AI/StableCascade. Model Overview Stable Cascade consists of three models: Stage A, Stage B and Stage C, representing a cascade to generate images, +hence the name “Stable Cascade”. Stage A & B are used to compress images, similar to what the job of the VAE is in Stable Diffusion. +However, with this setup, a much higher compression of images can be achieved. While the Stable Diffusion models use a +spatial compression factor of 8, encoding an image with resolution of 1024 x 1024 to 128 x 128, Stable Cascade achieves +a compression factor of 42. This encodes a 1024 x 1024 image to 24 x 24, while being able to accurately decode the +image. This comes with the great benefit of cheaper training and inference. Furthermore, Stage C is responsible +for generating the small 24 x 24 latents given a text prompt. Uses Direct Use The model is intended for research purposes for now. Possible research areas and tasks include Research on generative models. Safe deployment of models which have the potential to generate harmful content. Probing and understanding the limitations and biases of generative models. Generation of artworks and use in design and other artistic processes. Applications in educational or creative tools. Excluded uses are described below. Out-of-Scope Use The model was not trained to be factual or true representations of people or events, +and therefore using the model to generate such content is out-of-scope for the abilities of this model. +The model should not be used in any way that violates Stability AI’s Acceptable Use Policy. Limitations and Bias Limitations Faces and people in general may not be generated properly. The autoencoding part of the model is lossy. StableCascadeCombinedPipeline class diffusers.StableCascadeCombinedPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModel decoder: StableCascadeUNet scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel prior_prior: StableCascadeUNet prior_text_encoder: CLIPTextModel prior_tokenizer: CLIPTokenizer prior_scheduler: DDPMWuerstchenScheduler prior_feature_extractor: Optional = None prior_image_encoder: Optional = None ) Parameters tokenizer (CLIPTokenizer) — +The decoder tokenizer to be used for text inputs. text_encoder (CLIPTextModel) — +The decoder text encoder to be used for text inputs. decoder (StableCascadeUNet) — +The decoder model to be used for decoder image generation pipeline. scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for decoder image generation pipeline. vqgan (PaellaVQModel) — +The VQGAN model to be used for decoder image generation pipeline. feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). prior_prior (StableCascadeUNet) — +The prior model to be used for prior pipeline. prior_scheduler (DDPMWuerstchenScheduler) — +The scheduler to be used for prior pipeline. Combined Pipeline for text-to-image generation using Stable Cascade. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union = None images: Union = None height: int = 512 width: int = 512 prior_num_inference_steps: int = 60 prior_timesteps: Optional = None prior_guidance_scale: float = 4.0 num_inference_steps: int = 12 decoder_timesteps: Optional = None decoder_guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation for the prior and decoder. images (torch.Tensor, PIL.Image.Image, List[torch.Tensor], List[PIL.Image.Image], optional) — +The images to guide the image generation for the prior. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings for the prior. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings for the prior. Can be used to easily tweak text inputs, e.g. +prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt +input argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +prior_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +prior_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked +to the text prompt, usually at the expense of lower image quality. prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 60) — +The number of prior denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. For more specific timestep spacing, you can pass customized +prior_timesteps num_inference_steps (int, optional, defaults to 12) — +The number of decoder denoising steps. More denoising steps usually lead to a higher quality image at +the expense of slower inference. For more specific timestep spacing, you can pass customized +timesteps decoder_guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. 
latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadeCombinedPipeline + +>>> pipe = StableCascadeCombinedPipeline.from_pretrained("stabilityai/stable-cascade-combined", torch_dtype=torch.bfloat16).to( +... "cuda" +... ) +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> images = pipe(prompt=prompt).images enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models (unet, text_encoder, vae, and safety checker state dicts) to CPU using 🤗 +Accelerate, significantly reducing memory usage. Models are moved to a torch.device('meta') and loaded on a +GPU only when their specific submodule’s forward method is called. Offloading happens on a submodule basis. +Memory savings are higher than using enable_model_cpu_offload, but performance is lower.
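For reference, the offloading helpers above are enabled with a single call before running inference. A minimal sketch, reusing the checkpoint name and dtype from the combined pipeline example above:
Copied
import torch
from diffusers import StableCascadeCombinedPipeline

pipe = StableCascadeCombinedPipeline.from_pretrained(
    "stabilityai/stable-cascade-combined", torch_dtype=torch.bfloat16
)
# Move each whole sub-model to the GPU only when it is needed and keep it resident
# until the next sub-model runs (faster than sequential offloading, less memory than .to("cuda")).
pipe.enable_model_cpu_offload()

prompt = "an image of a shiba inu, donning a spacesuit and helmet"
images = pipe(prompt=prompt).images
When memory is the binding constraint and slower generation is acceptable, enable_sequential_cpu_offload() can be swapped in instead.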
StableCascadePriorPipeline class diffusers.StableCascadePriorPipeline < source > ( tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection prior: StableCascadeUNet scheduler: DDPMWuerstchenScheduler resolution_multiple: float = 42.67 feature_extractor: Optional = None image_encoder: Optional = None ) Parameters prior (StableCascadeUNet) — +The Stable Cascade prior to approximate the image embedding from the text and/or image embedding. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder (laion/CLIP-ViT-bigG-14-laion2B-39B-b160k). feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the image_encoder. image_encoder (CLIPVisionModelWithProjection) — +Frozen CLIP image-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with prior to generate image embedding. resolution_multiple (‘float’, optional, defaults to 42.67) — +Default resolution for multiple images generated. Pipeline for generating image prior for Stable Cascade. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None images: Union = None height: int = 1024 width: int = 1024 num_inference_steps: int = 20 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None prompt_embeds: Optional = None prompt_embeds_pooled: Optional = None negative_prompt_embeds: Optional = None negative_prompt_embeds_pooled: Optional = None image_embeds: Optional = None num_images_per_prompt: Optional = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. height (int, optional, defaults to 1024) — +The height in pixels of the generated image. width (int, optional, defaults to 1024) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 60) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 8.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +decoder_guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting +decoder_guidance_scale > 1. Higher guidance scale encourages to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if decoder_guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. prompt_embeds_pooled (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. 
negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. negative_prompt_embeds_pooled (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds_pooled will be generated from negative_prompt input +argument. image_embeds (torch.FloatTensor, optional) — +Pre-generated image embeddings. Can be used to easily tweak image inputs. +If not provided, image embeddings will be generated from the images input argument if available. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pt") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadePriorPipeline + +>>> prior_pipe = StableCascadePriorPipeline.from_pretrained( +... "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) StableCascadePriorPipelineOutput class diffusers.pipelines.stable_cascade.pipeline_stable_cascade_prior.StableCascadePriorPipelineOutput < source > ( image_embeddings: Union prompt_embeds: Union negative_prompt_embeds: Union ) Parameters image_embeddings (torch.FloatTensor or np.ndarray) — +Prior image embeddings for the text prompt. prompt_embeds (torch.FloatTensor) — +Text embeddings for the prompt. negative_prompt_embeds (torch.FloatTensor) — +Text embeddings for the negative prompt. Output class for StableCascadePriorPipeline.
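In practice these fields are simply read off the returned object and handed to the decoder stage. A minimal sketch, continuing from the prior example above (the attribute names are the ones documented here; the decoder call itself is shown in the next section):
Copied
>>> prior_output = prior_pipe(prompt)
>>> image_embeddings = prior_output.image_embeddings  # consumed by the decoder stage
>>> prompt_embeds = prior_output.prompt_embeds
>>> negative_prompt_embeds = prior_output.negative_prompt_embeds
Because the decoder only needs these tensors, the prior stage does not have to be re-run when you are only experimenting with decoder settings.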
StableCascadeDecoderPipeline class diffusers.StableCascadeDecoderPipeline < source > ( decoder: StableCascadeUNet tokenizer: CLIPTokenizer text_encoder: CLIPTextModel scheduler: DDPMWuerstchenScheduler vqgan: PaellaVQModel latent_dim_scale: float = 10.67 ) Parameters tokenizer (CLIPTokenizer) — +The CLIP tokenizer. text_encoder (CLIPTextModel) — +The CLIP text encoder. decoder (StableCascadeUNet) — +The Stable Cascade decoder unet. vqgan (PaellaVQModel) — +The VQGAN model. scheduler (DDPMWuerstchenScheduler) — +A scheduler to be used in combination with decoder to denoise the encoded image latents. latent_dim_scale (float, optional, defaults to 10.67) — +Multiplier to determine the VQ latent space size from the image embeddings. If the image embeddings are +height=24 and width=24, the VQ latent shape needs to be height=int(24*10.67)=256 and +width=int(24*10.67)=256 in order to match the training conditions. Pipeline for generating images from the Stable Cascade model. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeddings: Union prompt: Union = None num_inference_steps: int = 10 guidance_scale: float = 0.0 negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) Parameters image_embeddings (torch.FloatTensor or List[torch.FloatTensor]) — +Image embeddings either extracted from an image or generated by a prior model. prompt (str or List[str]) — +The prompt or prompts to guide the image generation. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 0.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting +guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely +linked to the text prompt, usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation.
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline + +>>> prior_pipe = StableCascadePriorPipeline.from_pretrained( +... "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16 +... ).to("cuda") +>>> gen_pipe = StableCascadeDecoderPipeline.from_pretrained( +... "stabilityai/stable-cascade", torch_dtype=torch.float16 +... ).to("cuda") + +>>> prompt = "an image of a shiba inu, donning a spacesuit and helmet" +>>> prior_output = prior_pipe(prompt) +>>> images = gen_pipe(prior_output.image_embeddings, prompt=prompt).images diff --git a/scrapped_outputs/f54ead01afea9bcdc6b809c4dcdaaddf.txt b/scrapped_outputs/f54ead01afea9bcdc6b809c4dcdaaddf.txt new file mode 100644 index 0000000000000000000000000000000000000000..4540f6a7c0e03add95f145da0638f9a5a6f1c9cb --- /dev/null +++ b/scrapped_outputs/f54ead01afea9bcdc6b809c4dcdaaddf.txt @@ -0,0 +1,14 @@ +DeepCache DeepCache accelerates StableDiffusionPipeline and StableDiffusionXLPipeline by strategically caching and reusing high-level features while efficiently updating low-level features by taking advantage of the U-Net architecture. Start by installing DeepCache: Copied pip install DeepCache Then load and enable the DeepCacheSDHelper: Copied import torch + from diffusers import StableDiffusionPipeline + pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to("cuda") + ++ from DeepCache import DeepCacheSDHelper ++ helper = DeepCacheSDHelper(pipe=pipe) ++ helper.set_params( ++ cache_interval=3, ++ cache_branch_id=0, ++ ) ++ helper.enable() + + image = pipe("a photo of an astronaut on a moon").images[0] The set_params method accepts two arguments: cache_interval and cache_branch_id. cache_interval is the frequency of feature caching, specified as the number of steps between each cache operation. cache_branch_id identifies which branch of the network (ordered from the shallowest to the deepest layer) is responsible for executing the caching processes. +Opting for a lower cache_branch_id or a larger cache_interval can lead to faster inference speed at the expense of reduced image quality (ablation experiments of these two hyperparameters can be found in the paper).
Once those arguments are set, use the enable or disable methods to activate or deactivate the DeepCacheSDHelper. You can find more generated samples (original pipeline vs DeepCache) and the corresponding inference latency in the WandB report. The prompts are randomly selected from the MS-COCO 2017 dataset. Benchmark We tested how much faster DeepCache accelerates Stable Diffusion v2.1 with 50 inference steps on an NVIDIA RTX A5000, using different configurations for resolution, batch size, cache interval (I), and cache branch (B). Resolution Batch size Original DeepCache(I=3, B=0) DeepCache(I=5, B=0) DeepCache(I=5, B=1) 512 8 15.96 6.88(2.32x) 5.03(3.18x) 7.27(2.20x) 4 8.39 3.60(2.33x) 2.62(3.21x) 3.75(2.24x) 1 2.61 1.12(2.33x) 0.81(3.24x) 1.11(2.35x) 768 8 43.58 18.99(2.29x) 13.96(3.12x) 21.27(2.05x) 4 22.24 9.67(2.30x) 7.10(3.13x) 10.74(2.07x) 1 6.33 2.72(2.33x) 1.97(3.21x) 2.98(2.12x) 1024 8 101.95 45.57(2.24x) 33.72(3.02x) 53.00(1.92x) 4 49.25 21.86(2.25x) 16.19(3.04x) 25.78(1.91x) 1 13.83 6.07(2.28x) 4.43(3.12x) 7.15(1.93x) diff --git a/scrapped_outputs/f55b89f5f263a368539fe511156355aa.txt b/scrapped_outputs/f55b89f5f263a368539fe511156355aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..98b1f46689d66a702a6b7e7c7df7d908b16635d7 --- /dev/null +++ b/scrapped_outputs/f55b89f5f263a368539fe511156355aa.txt @@ -0,0 +1,30 @@ +Transformer Temporal A Transformer model for video-like data. TransformerTemporalModel class diffusers.models.TransformerTemporalModel < source > ( num_attention_heads: int = 16 attention_head_dim: int = 88 in_channels: Optional = None out_channels: Optional = None num_layers: int = 1 dropout: float = 0.0 norm_num_groups: int = 32 cross_attention_dim: Optional = None attention_bias: bool = False sample_size: Optional = None activation_fn: str = 'geglu' norm_elementwise_affine: bool = True double_self_attention: bool = True positional_embeddings: Optional = None num_positional_embeddings: Optional = None ) Parameters num_attention_heads (int, optional, defaults to 16) — The number of heads to use for multi-head attention. attention_head_dim (int, optional, defaults to 88) — The number of channels in each head. in_channels (int, optional) — +The number of channels in the input and output (specify if the input is continuous). num_layers (int, optional, defaults to 1) — The number of layers of Transformer blocks to use. dropout (float, optional, defaults to 0.0) — The dropout probability to use. cross_attention_dim (int, optional) — The number of encoder_hidden_states dimensions to use. attention_bias (bool, optional) — +Configure if the TransformerBlock attention should contain a bias parameter. sample_size (int, optional) — The width of the latent images (specify if the input is discrete). +This is fixed during training since it is used to learn a number of position embeddings. activation_fn (str, optional, defaults to "geglu") — +Activation function to use in feed-forward. See diffusers.models.activations.get_activation for supported +activation functions. norm_elementwise_affine (bool, optional) — +Configure if the TransformerBlock should use learnable elementwise affine parameters for normalization. double_self_attention (bool, optional) — +Configure if each TransformerBlock should contain two self-attention layers. +positional_embeddings — (str, optional): +The type of positional embeddings to apply to the sequence input before passing use. 
+num_positional_embeddings — (int, optional): +The maximum length of the sequence over which to apply positional embeddings. A Transformer model for video-like data. forward < source > ( hidden_states: FloatTensor encoder_hidden_states: Optional = None timestep: Optional = None class_labels: LongTensor = None num_frames: int = 1 cross_attention_kwargs: Optional = None return_dict: bool = True ) → TransformerTemporalModelOutput or tuple Parameters hidden_states (torch.LongTensor of shape (batch size, num latent pixels) if discrete, torch.FloatTensor of shape (batch size, channel, height, width) if continuous) — +Input hidden_states. encoder_hidden_states ( torch.LongTensor of shape (batch size, encoder_hidden_states dim), optional) — +Conditional embeddings for cross attention layer. If not given, cross-attention defaults to +self-attention. timestep ( torch.LongTensor, optional) — +Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm. class_labels ( torch.LongTensor of shape (batch size, num classes), optional) — +Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in +AdaLayerZeroNorm. num_frames (int, optional, defaults to 1) — +The number of frames to be processed per batch. This is used to reshape the hidden states. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DConditionOutput instead of a plain +tuple. Returns +TransformerTemporalModelOutput or tuple + +If return_dict is True, an TransformerTemporalModelOutput is +returned, otherwise a tuple where the first element is the sample tensor. + The TransformerTemporal forward method. TransformerTemporalModelOutput class diffusers.models.transformer_temporal.TransformerTemporalModelOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size x num_frames, num_channels, height, width)) — +The hidden states output conditioned on encoder_hidden_states input. The output of TransformerTemporalModel. diff --git a/scrapped_outputs/f583d72dd2cfe906f7f0f5a1a34efc55.txt b/scrapped_outputs/f583d72dd2cfe906f7f0f5a1a34efc55.txt new file mode 100644 index 0000000000000000000000000000000000000000..88c6593b32ef62cb7820e9bf8a18fcf276dfa370 --- /dev/null +++ b/scrapped_outputs/f583d72dd2cfe906f7f0f5a1a34efc55.txt @@ -0,0 +1,304 @@ +Stable unCLIP Stable unCLIP checkpoints are finetuned from Stable Diffusion 2.1 checkpoints to condition on CLIP image embeddings. +Stable unCLIP still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used +for text guided image variation. When combined with an unCLIP prior, it can also be used for full text to image generation. The abstract from the paper is: Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. 
Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples. Tips Stable unCLIP takes noise_level as input during inference which determines how much noise is added to the image embeddings. A higher noise_level increases variation in the final un-noised images. By default, we do not add any additional noise to the image embeddings (noise_level = 0). Text-to-Image Generation Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain’s open source DALL-E 2 replication Karlo: Copied import torch +from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline +from diffusers.models import PriorTransformer +from transformers import CLIPTokenizer, CLIPTextModelWithProjection + +prior_model_id = "kakaobrain/karlo-v1-alpha" +data_type = torch.float16 +prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type) + +prior_text_model_id = "openai/clip-vit-large-patch14" +prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id) +prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type) +prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler") +prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config) + +stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small" + +pipe = StableUnCLIPPipeline.from_pretrained( + stable_unclip_model_id, + torch_dtype=data_type, + variant="fp16", + prior_tokenizer=prior_tokenizer, + prior_text_encoder=prior_text_model, + prior=prior, + prior_scheduler=prior_scheduler, +) + +pipe = pipe.to("cuda") +wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular" + +image = pipe(prompt=wave_prompt).images[0] +image For text-to-image we use stabilityai/stable-diffusion-2-1-unclip-small as it was trained on CLIP ViT-L/14 embedding, the same as the Karlo model prior. stabilityai/stable-diffusion-2-1-unclip was trained on OpenCLIP ViT-H, so we don’t recommend its use. 
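The noise_level input mentioned in the Tips above is passed straight to the pipeline call. A minimal sketch, reusing the pipe and wave_prompt objects from the text-to-image example above (the value 250 is only an illustrative setting; the default of 0 adds no extra noise):
Copied
# a higher noise_level adds more noise to the CLIP image embeddings and yields more varied samples
image = pipe(prompt=wave_prompt, noise_level=250).images[0]
image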
Text guided Image-to-Image Variation Copied from diffusers import StableUnCLIPImg2ImgPipeline +from diffusers.utils import load_image +import torch + +pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variation="fp16" +) +pipe = pipe.to("cuda") + +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png" +init_image = load_image(url) + +images = pipe(init_image).images +images[0].save("variation_image.png") Optionally, you can also pass a prompt to pipe such as: Copied prompt = "A fantasy landscape, trending on artstation" + +image = pipe(init_image, prompt=prompt).images[0] +image Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. StableUnCLIPPipeline class diffusers.StableUnCLIPPipeline < source > ( prior_tokenizer: CLIPTokenizer prior_text_encoder: CLIPTextModelWithProjection prior: PriorTransformer prior_scheduler: KarrasDiffusionSchedulers image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModelWithProjection unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters prior_tokenizer (CLIPTokenizer) — +A CLIPTokenizer. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen CLIPTextModelWithProjection text-encoder. prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_scheduler (KarrasDiffusionSchedulers) — +Scheduler used in the prior denoising process. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (CLIPTokenizer) — +A CLIPTokenizer. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 prior_num_inference_steps: int = 25 prior_guidance_scale: float = 4.0 prior_latents: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. 
The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. prior_num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps in the prior denoising process. More denoising steps usually lead to a +higher quality image at the expense of slower inference. prior_guidance_scale (float, optional, defaults to 4.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. prior_latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +embedding generation in the prior denoising process. Can be used to tweak the same generation with +different prompts. If not provided, a latents tensor is generated by sampling using the supplied random +generator. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableUnCLIPPipeline + +>>> pipe = StableUnCLIPPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> images = pipe(prompt).images +>>> images[0].save("astronaut_horse.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! 
Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. StableUnCLIPImg2ImgPipeline class diffusers.StableUnCLIPImg2ImgPipeline < source > ( feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection image_normalizer: StableUnCLIPImageNormalizer image_noising_scheduler: KarrasDiffusionSchedulers tokenizer: CLIPTokenizer text_encoder: CLIPTextModel unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vae: AutoencoderKL ) Parameters feature_extractor (CLIPImageProcessor) — +Feature extractor for image pre-processing before being encoded. image_encoder (CLIPVisionModelWithProjection) — +CLIP vision model for encoding images. image_normalizer (StableUnCLIPImageNormalizer) — +Used to normalize the predicted image embeddings before the noise is applied and un-normalize the image +embeddings after the noise has been applied. image_noising_scheduler (KarrasDiffusionSchedulers) — +Noise schedule for adding noise to the predicted image embeddings. The amount of noise to add is determined +by the noise_level. tokenizer (~transformers.CLIPTokenizer) — +A [~transformers.CLIPTokenizer)]. text_encoder (CLIPTextModel) — +Frozen CLIPTextModel text-encoder. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (KarrasDiffusionSchedulers) — +A scheduler to be used in combination with unet to denoise the encoded image latents. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. Pipeline for text-guided image-to-image generation using stable unCLIP. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( image: Union = None prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 20 guidance_scale: float = 10 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Optional = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 image_embeds: Optional = None clip_skip: Optional = None ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, either prompt_embeds will be +used or prompt is initialized to "". image (torch.FloatTensor or PIL.Image.Image) — +Image or tensor representing an image batch. The image is encoded to its CLIP embedding which the +unet is conditioned on. The image is not encoded by the vae and then used as the latents in the +denoising process like it is in the standard Stable Diffusion text-guided image variation process. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 20) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 10.0) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the image embeddings. A higher noise_level increases the variance in +the final un-noised images. See StableUnCLIPPipeline.noise_image_embeddings() for more details. image_embeds (torch.FloatTensor, optional) — +Pre-generated CLIP embeddings to condition the unet on. These latents are not used in the denoising +process. If you want to provide pre-generated latents, pass them to __call__ as latents. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +ImagePipelineOutput or tuple + +~ pipeline_utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning +a tuple, the first element is a list with the generated images. + The call function to the pipeline for generation. Examples: Copied >>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import StableUnCLIPImg2ImgPipeline + +>>> pipe = StableUnCLIPImg2ImgPipeline.from_pretrained( +... "fusing/stable-unclip-2-1-l-img2img", torch_dtype=torch.float16 +... ) # TODO update model path +>>> pipe = pipe.to("cuda") + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> prompt = "A fantasy landscape, trending on artstation" + +>>> images = pipe(prompt, init_image).images +>>> images[0].save("fantasy_landscape.png") enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. 
These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. 
prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. noise_image_embeddings < source > ( image_embeds: Tensor noise_level: int noise: Optional = None generator: Optional = None ) Add noise to the image embeddings. The amount of noise is controlled by a noise_level input. A higher +noise_level increases the variance in the final un-noised images. The noise is applied in two ways: A noise schedule is applied directly to the embeddings. A vector of sinusoidal time embeddings are appended to the output. In both cases, the amount of noise is controlled by the same noise_level. The embeddings are normalized before the noise is applied and un-normalized after the noise is applied. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/f594435f1269ab09ee9b86291486737f.txt b/scrapped_outputs/f594435f1269ab09ee9b86291486737f.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b2f521e40e38cf54824f4d7c2c05c78554dd3cf --- /dev/null +++ b/scrapped_outputs/f594435f1269ab09ee9b86291486737f.txt @@ -0,0 +1,62 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. The original codebase can be found at haoheliu/AudioLDM. 
Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. 
Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
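Putting the memory helper and the output class above together: a minimal sketch that reuses the cvssp/audioldm-s-full-v2 checkpoint from the example above (assuming a CUDA device is available) and saves one file per generated waveform.

import torch
import scipy
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Decode the latents in slices to lower peak memory, which helps when generating
# several waveforms per prompt.
pipe.enable_vae_slicing()

prompt = "Water stream in a forest, high quality"
output = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0, num_waveforms_per_prompt=2)

# output.audios holds one NumPy waveform per generated sample; the examples on this page
# write them out at a 16 kHz sampling rate.
for i, audio in enumerate(output.audios):
    scipy.io.wavfile.write(f"stream_{i}.wav", rate=16000, data=audio)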
diff --git a/scrapped_outputs/f5966845f6e5d12fe021e28a18c62344.txt b/scrapped_outputs/f5966845f6e5d12fe021e28a18c62344.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f5a256341bc3ec28f35a9ac90e59448c.txt b/scrapped_outputs/f5a256341bc3ec28f35a9ac90e59448c.txt new file mode 100644 index 0000000000000000000000000000000000000000..a3ac22e44f82a2bfeede971a5b1063163f7e9fc2 --- /dev/null +++ b/scrapped_outputs/f5a256341bc3ec28f35a9ac90e59448c.txt @@ -0,0 +1,176 @@ +Image-to-Video Generation with PIA (Personalized Image Animator) Overview PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles, high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and the compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within and allows for a stronger focus on aligning with motion-related guidance. Project page Available Pipelines Pipeline Tasks Demo PIAPipeline Image-to-Video Generation with PIA Available checkpoints Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5 Usage example PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet model with a 9 channel input convolution layer. The following example demonstrates how to use PIA to generate a video from a single image. 
Copied import torch +from diffusers import ( + EulerDiscreteScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16) + +pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a field" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-animation.gif") Here are some sample outputs: masterpiece, bestquality, sunset. + If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler as this can also have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the beta schedule of the scheduler. We recommend setting this to linear. Using FreeInit FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu. FreeInit is an effective method that improves temporal consistency and overall quality of videos generated using video-diffusion-models without any addition training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter and various other video generation models seamlessly at inference time, and works by iteratively refining the latent-initialization noise. More details can be found it the paper. The following example demonstrates the usage of FreeInit. Copied import torch +from diffusers import ( + DDIMScheduler, + MotionAdapter, + PIAPipeline, +) +from diffusers.utils import export_to_gif, load_image + +adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter") +pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) + +# enable FreeInit +# Refer to the enable_free_init documentation for a full list of configurable parameters +pipe.enable_free_init(method="butterworth", use_fast_sampling=True) + +# Memory saving options +pipe.enable_model_cpu_offload() +pipe.enable_vae_slicing() + +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +) +image = image.resize((512, 512)) +prompt = "cat in a hat" +negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality" + +generator = torch.Generator("cpu").manual_seed(0) + +output = pipe(image=image, prompt=prompt, generator=generator) +frames = output.frames[0] +export_to_gif(frames, "pia-freeinit-animation.gif") masterpiece, bestquality, sunset. + FreeInit is not really free - the improved quality comes at the cost of extra computation. It requires sampling a few extra times depending on the num_iters parameter that is set when enabling it. 
Setting the use_fast_sampling parameter to True can improve the overall performance (at the cost of lower quality compared to use_fast_sampling=False, but still better results than vanilla video generation models). PIAPipeline class diffusers.PIAPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: Union scheduler: Union motion_adapter: Optional = None feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel used to create a UNetMotionModel to denoise the encoded video latents. motion_adapter (MotionAdapter) — +A MotionAdapter to be used in combination with unet to denoise the encoded video latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. Pipeline for image-to-video generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( image: Union prompt: Union = None strength: float = 1.0 num_frames: Optional = 16 height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Union = None num_videos_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None motion_scale: int = 0 output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: Optional = None callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → TextToVideoSDPipelineOutput or tuple Parameters image (PipelineImageInput) — +The input image to be used for video generation. prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. strength (float, optional, defaults to 1.0) — Indicates extent to transform the reference image. Must be between 0 and 1. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated video. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated video. num_frames (int, optional, defaults to 16) — +The number of video frames that are generated. Defaults to 16 frames which at 8 frames per second +amounts to 2 seconds of video. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to higher quality videos at the +expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for video +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. Latents should be of shape +(batch_size, num_channel, num_frames, height, width). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. +motion_scale — (int, optional, defaults to 0): +Parameter that controls the amount and type of motion that is added to the image. Increasing the value increases the amount of motion, while specific +ranges of values control the type of motion that is added. Must be between 0 and 8. +Set between 0-2 to only increase the amount of motion. +Set between 3-5 to create looping motion. +Set between 6-8 to perform motion with image style transfer. output_type (str, optional, defaults to "pil") — +The output format of the generated video. Choose between torch.FloatTensor, PIL.Image or +np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a TextToVideoSDPipelineOutput instead +of a plain tuple. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeine class. 
Returns +TextToVideoSDPipelineOutput or tuple + +If return_dict is True, TextToVideoSDPipelineOutput is +returned, otherwise a tuple is returned where the first element is a list with the generated frames. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import ( +... EulerDiscreteScheduler, +... MotionAdapter, +... PIAPipeline, +... ) +>>> from diffusers.utils import export_to_gif, load_image +>>> adapter = MotionAdapter.from_pretrained("../checkpoints/pia-diffusers") +>>> pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter) +>>> pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config) +>>> image = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true" +... ) +>>> image = image.resize((512, 512)) +>>> prompt = "cat in a hat" +>>> negative_prompt = "wrong white balance, dark, sketches,worst quality,low quality, deformed, distorted, disfigured, bad eyes, wrong lips,weird mouth, bad teeth, mutated hands and fingers, bad anatomy,wrong anatomy, amputation, extra limb, missing limb, floating,limbs, disconnected limbs, mutation, ugly, disgusting, bad_pictures, negative_hand-neg" +>>> generator = torch.Generator("cpu").manual_seed(0) +>>> output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator) +>>> frames = output.frames[0] +>>> export_to_gif(frames, "pia-animation.gif") disable_free_init < source > ( ) Disables the FreeInit mechanism if enabled. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_free_init < source > ( num_iters: int = 3 use_fast_sampling: bool = False method: str = 'butterworth' order: int = 4 spatial_stop_frequency: float = 0.25 temporal_stop_frequency: float = 0.25 generator: Optional = None ) Parameters num_iters (int, optional, defaults to 3) — +Number of FreeInit noise re-initialization iterations. use_fast_sampling (bool, optional, defaults to False) — +Whether or not to speedup sampling procedure at the cost of probably lower quality results. Enables +the “Coarse-to-Fine Sampling” strategy, as mentioned in the paper, if set to True. method (str, optional, defaults to butterworth) — +Must be one of butterworth, ideal or gaussian to use as the filtering method for the +FreeInit low pass filter. order (int, optional, defaults to 4) — +Order of the filter used in butterworth method. Larger values lead to ideal method behaviour +whereas lower values lead to gaussian method behaviour. spatial_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for spatial dimensions. Must be between 0 to 1. Referred to as d_s in +the original implementation. temporal_stop_frequency (float, optional, defaults to 0.25) — +Normalized stop frequency for temporal dimensions. Must be between 0 to 1. Referred to as d_t in +the original implementation. generator (torch.Generator, optional, defaults to 0.25) — +A torch.Generator to make +FreeInit generation deterministic. Enables the FreeInit mechanism as in https://arxiv.org/abs/2312.07537. 
This implementation has been adapted from the official repository. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
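The enable_freeu()/disable_freeu() pair documented above can be toggled on an existing PIA pipeline without reloading it. A short self-contained sketch, mirroring the earlier PIA example; the s1/s2/b1/b2 values are an assumption borrowed from commonly used Stable Diffusion 1.5 FreeU settings, not PIA-specific recommendations:

import torch
from diffusers import MotionAdapter, PIAPipeline
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# FreeU scaling factors below are illustrative (common SD 1.5 values), not tuned for PIA.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)

image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
).resize((512, 512))

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt="cat in a field", generator=generator)
export_to_gif(output.frames[0], "pia-freeu-animation.gif")

# FreeU can be switched off again without reloading the pipeline.
pipe.disable_freeu()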
enable_freeu disable_freeu enable_free_init disable_free_init enable_vae_slicing disable_vae_slicing enable_vae_tiling disable_vae_tiling PIAPipelineOutput class diffusers.pipelines.pia.PIAPipelineOutput < source > ( frames: Union ) Parameters frames (torch.Tensor, np.ndarray, or List[PIL.Image.Image]) — Nested list of length batch_size with denoised PIL image sequences of length num_frames, a NumPy array of shape (batch_size, num_frames, channels, height, width), or a Torch tensor of shape (batch_size, num_frames, channels, height, width). Output class for PIAPipeline. diff --git a/scrapped_outputs/f5b7ef3b3d7181aed65e2adcc30246dd.txt b/scrapped_outputs/f5b7ef3b3d7181aed65e2adcc30246dd.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f5ebc5d076df6d5fa11ddf169df3df78.txt b/scrapped_outputs/f5ebc5d076df6d5fa11ddf169df3df78.txt new file mode 100644 index 0000000000000000000000000000000000000000..032f569366b1a5bb387a95e95afb74b4ab65d517 --- /dev/null +++ b/scrapped_outputs/f5ebc5d076df6d5fa11ddf169df3df78.txt @@ -0,0 +1,17 @@ +UNet1DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This is a 1D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. UNet1DModel class diffusers.UNet1DModel < source > ( sample_size: int = 65536 sample_rate: Optional = None in_channels: int = 2 out_channels: int = 2 extra_in_channels: int = 0 time_embedding_type: str = 'fourier' flip_sin_to_cos: bool = True use_timestep_embedding: bool = False freq_shift: float = 0.0 down_block_types: Tuple = ('DownBlock1DNoSkip', 'DownBlock1D', 'AttnDownBlock1D') up_block_types: Tuple = ('AttnUpBlock1D', 'UpBlock1D', 'UpBlock1DNoSkip') mid_block_type: Tuple = 'UNetMidBlock1D' out_block_type: str = None block_out_channels: Tuple = (32, 32, 64) act_fn: str = None norm_num_groups: int = 8 layers_per_block: int = 1 downsample_each_block: bool = False ) Parameters sample_size (int, optional) — Default length of sample. Should be adaptable at runtime.
in_channels (int, optional, defaults to 2) — Number of channels in the input sample. out_channels (int, optional, defaults to 2) — Number of channels in the output. extra_in_channels (int, optional, defaults to 0) — +Number of additional channels to be added to the input of the first down block. Useful for cases where the +input data has more channels than what the model was initially designed for. time_embedding_type (str, optional, defaults to "fourier") — Type of time embedding to use. freq_shift (float, optional, defaults to 0.0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to False) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (32, 32, 64)) — +Tuple of block output channels. mid_block_type (str, optional, defaults to "UNetMidBlock1D") — Block type for middle of UNet. out_block_type (str, optional, defaults to None) — Optional output processing block of UNet. act_fn (str, optional, defaults to None) — Optional activation function in UNet blocks. norm_num_groups (int, optional, defaults to 8) — The number of groups for normalization. layers_per_block (int, optional, defaults to 1) — The number of layers per block. downsample_each_block (int, optional, defaults to False) — +Experimental feature for using a UNet without upsampling. A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union return_dict: bool = True ) → ~models.unet_1d.UNet1DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch_size, num_channels, sample_size). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.unet_1d.UNet1DOutput instead of a plain tuple. Returns +~models.unet_1d.UNet1DOutput or tuple + +If return_dict is True, an ~models.unet_1d.UNet1DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet1DModel forward method. UNet1DOutput class diffusers.models.unets.unet_1d.UNet1DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, sample_size)) — +The hidden states output from the last layer of the model. The output of UNet1DModel. diff --git a/scrapped_outputs/f6079c1d54828c6256824e8f083b9794.txt b/scrapped_outputs/f6079c1d54828c6256824e8f083b9794.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f62a06388d417520f8058ca55597a716.txt b/scrapped_outputs/f62a06388d417520f8058ca55597a716.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca6b3b34aaa6cc1e6674850833f805d99aa68782 --- /dev/null +++ b/scrapped_outputs/f62a06388d417520f8058ca55597a716.txt @@ -0,0 +1,33 @@ +Logging 🤗 Diffusers has a centralized logging system to easily manage the verbosity of the library. 
The default verbosity is set to WARNING. To change the verbosity level, use one of the direct setters. For instance, to change the verbosity to the INFO level. Copied import diffusers + +diffusers.logging.set_verbosity_info() You can also use the environment variable DIFFUSERS_VERBOSITY to override the default verbosity. You can set it +to one of the following: debug, info, warning, error, critical. For example: Copied DIFFUSERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable +DIFFUSERS_NO_ADVISORY_WARNINGS to a true value, like 1. This disables any warning logged by +logger.warning_advice. For example: Copied DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: Copied from diffusers.utils import logging + +logging.set_verbosity_info() +logger = logging.get_logger("diffusers") +logger.info("INFO") +logger.warning("WARN") All methods of the logging module are documented below. The main methods are +logging.get_verbosity() to get the current level of verbosity in the logger and +logging.set_verbosity() to set the verbosity to the level of your choice. In order from the least verbose to the most verbose: Method Integer value Description diffusers.logging.CRITICAL or diffusers.logging.FATAL 50 only report the most critical errors diffusers.logging.ERROR 40 only report errors diffusers.logging.WARNING or diffusers.logging.WARN 30 only report errors and warnings (default) diffusers.logging.INFO 20 only report errors, warnings, and basic information diffusers.logging.DEBUG 10 report all information By default, tqdm progress bars are displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() are used to enable or disable this behavior. Base setters diffusers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. diffusers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. diffusers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. diffusers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions diffusers.utils.logging.get_verbosity < source > ( ) → int Returns +int + +Logging level integers which can be one of: + +50: diffusers.logging.CRITICAL or diffusers.logging.FATAL +40: diffusers.logging.ERROR +30: diffusers.logging.WARNING or diffusers.logging.WARN +20: diffusers.logging.INFO +10: diffusers.logging.DEBUG + + Return the current level for the 🤗 Diffusers’ root logger as an int. diffusers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — +Logging level which can be one of: + +diffusers.logging.CRITICAL or diffusers.logging.FATAL +diffusers.logging.ERROR +diffusers.logging.WARNING or diffusers.logging.WARN +diffusers.logging.INFO +diffusers.logging.DEBUG + Set the verbosity level for the 🤗 Diffusers’ root logger. diffusers.utils.get_logger < source > ( name: Optional = None ) Return a logger with the specified name. This function is not supposed to be directly accessed unless you are writing a custom diffusers module. diffusers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the 🤗 Diffusers’ root logger. diffusers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the 🤗 Diffusers’ root logger. 
diffusers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every 🤗 Diffusers’ logger. The explicit formatter is as follows: Copied [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE +All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.reset_format < source > ( ) Resets the formatting for 🤗 Diffusers’ loggers. All handlers currently bound to the root logger are affected by this method. diffusers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. diffusers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar. diff --git a/scrapped_outputs/f652e780a82d719251e0a79812482508.txt b/scrapped_outputs/f652e780a82d719251e0a79812482508.txt new file mode 100644 index 0000000000000000000000000000000000000000..810a91b8fef1b421013373c972981ec5ae26c4c4 --- /dev/null +++ b/scrapped_outputs/f652e780a82d719251e0a79812482508.txt @@ -0,0 +1,21 @@ +ConsistencyDecoderScheduler This scheduler is a part of the ConsistencyDecoderPipeline and was introduced in DALL-E 3. The original codebase can be found at openai/consistency_models. ConsistencyDecoderScheduler class diffusers.schedulers.ConsistencyDecoderScheduler < source > ( num_train_timesteps: int = 1024 sigma_data: float = 0.5 ) scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor generator: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. timestep (float) — +The current timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple. Returns +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput or tuple + +If return_dict is True, +~schedulers.scheduling_consistency_models.ConsistencyDecoderSchedulerOutput is returned, otherwise +a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). diff --git a/scrapped_outputs/f6635a14a1565b175f9de9c4569f5ec2.txt b/scrapped_outputs/f6635a14a1565b175f9de9c4569f5ec2.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb52f025805b2d01444b6cca5ff880e32ccc5ff8 --- /dev/null +++ b/scrapped_outputs/f6635a14a1565b175f9de9c4569f5ec2.txt @@ -0,0 +1,71 @@ +AutoencoderKL The variational autoencoder (VAE) model with KL loss was introduced in Auto-Encoding Variational Bayes by Diederik P. Kingma and Max Welling. The model is used in 🤗 Diffusers to encode images into latents and to decode latent representations into images. 
The abstract from the paper is: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results. Loading from the original format By default the AutoencoderKL should be loaded with from_pretrained(), but it can also be loaded +from the original format using FromOriginalVAEMixin.from_single_file as follows: Copied from diffusers import AutoencoderKL + +url = "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors" # can also be a local file +model = AutoencoderKL.from_single_file(url) AutoencoderKL class diffusers.AutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 latents_mean: Optional = None latents_std: Optional = None force_upcast: float = True ) Parameters in_channels (int, optional, defaults to 3) — Number of channels in the input image. out_channels (int, optional, defaults to 3) — Number of channels in the output. down_block_types (Tuple[str], optional, defaults to ("DownEncoderBlock2D",)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to ("UpDecoderBlock2D",)) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (64,)) — +Tuple of block output channels. act_fn (str, optional, defaults to "silu") — The activation function to use. latent_channels (int, optional, defaults to 4) — Number of channels in the latent space. sample_size (int, optional, defaults to 32) — Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. force_upcast (bool, optional, default to True) — +If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. 
VAE +can be fine-tuned / trained to a lower range without loosing too much precision in which case +force_upcast can be set to False - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix A VAE model with KL loss for encoding images into latents and decoding latent representations into images. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). wrapper < source > ( *args **kwargs ) wrapper < source > ( *args **kwargs ) disable_slicing < source > ( ) Disable sliced VAE decoding. If enable_slicing was previously enabled, this method will go back to computing +decoding in one step. disable_tiling < source > ( ) Disable tiled VAE decoding. If enable_tiling was previously enabled, this method will go back to computing +decoding in one step. enable_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. enable_tiling < source > ( use_tiling: bool = True ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. forward < source > ( sample: FloatTensor sample_posterior: bool = False return_dict: bool = True generator: Optional = None ) Parameters sample (torch.FloatTensor) — Input sample. sample_posterior (bool, optional, defaults to False) — +Whether to sample from the posterior. return_dict (bool, optional, defaults to True) — +Whether or not to return a DecoderOutput instead of a plain tuple. fuse_qkv_projections < source > ( ) Enables fused QKV projections. For self-attention modules, all projection matrices (i.e., query, +key, value) are fused. For cross-attention modules, key and value projection matrices are fused. This API is 🧪 experimental. set_attn_processor < source > ( processor: Union ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — +The instantiated processor class or a dictionary of processor classes that will be set as the processor +for all Attention layers. +If processor is a dict, the key needs to define the path to the corresponding cross attention +processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. tiled_decode < source > ( z: FloatTensor return_dict: bool = True ) → ~models.vae.DecoderOutput or tuple Parameters z (torch.FloatTensor) — Input batch of latent vectors. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.vae.DecoderOutput instead of a plain tuple. Returns +~models.vae.DecoderOutput or tuple + +If return_dict is True, a ~models.vae.DecoderOutput is returned, otherwise a plain tuple is +returned. + Decode a batch of images using a tiled decoder. tiled_encode < source > ( x: FloatTensor return_dict: bool = True ) → ~models.autoencoder_kl.AutoencoderKLOutput or tuple Parameters x (torch.FloatTensor) — Input batch of images. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~models.autoencoder_kl.AutoencoderKLOutput instead of a plain tuple. 
Returns +~models.autoencoder_kl.AutoencoderKLOutput or tuple + +If return_dict is True, a ~models.autoencoder_kl.AutoencoderKLOutput is returned, otherwise a plain +tuple is returned. + Encode a batch of images using a tiled encoder. When this option is enabled, the VAE will split the input tensor into tiles to compute encoding in several +steps. This is useful to keep memory use constant regardless of image size. The end result of tiled encoding is +different from non-tiled encoding because each tile uses a different encoder. To avoid tiling artifacts, the +tiles overlap and are blended together to form a smooth output. You may still see tile-sized changes in the +output, but they should be much less noticeable. unfuse_qkv_projections < source > ( ) Disables the fused QKV projection if enabled. This API is 🧪 experimental. AutoencoderKLOutput class diffusers.models.modeling_outputs.AutoencoderKLOutput < source > ( latent_dist: DiagonalGaussianDistribution ) Parameters latent_dist (DiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of DiagonalGaussianDistribution. +DiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. DecoderOutput class diffusers.models.autoencoders.vae.DecoderOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. Output of decoding method. FlaxAutoencoderKL class diffusers.FlaxAutoencoderKL < source > ( in_channels: int = 3 out_channels: int = 3 down_block_types: Tuple = ('DownEncoderBlock2D',) up_block_types: Tuple = ('UpDecoderBlock2D',) block_out_channels: Tuple = (64,) layers_per_block: int = 1 act_fn: str = 'silu' latent_channels: int = 4 norm_num_groups: int = 32 sample_size: int = 32 scaling_factor: float = 0.18215 dtype: dtype = parent: Union = name: Optional = None ) Parameters in_channels (int, optional, defaults to 3) — +Number of channels in the input image. out_channels (int, optional, defaults to 3) — +Number of channels in the output. down_block_types (Tuple[str], optional, defaults to (DownEncoderBlock2D)) — +Tuple of downsample block types. up_block_types (Tuple[str], optional, defaults to (UpDecoderBlock2D)) — +Tuple of upsample block types. block_out_channels (Tuple[str], optional, defaults to (64,)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — +Number of ResNet layer for each block. act_fn (str, optional, defaults to silu) — +The activation function to use. latent_channels (int, optional, defaults to 4) — +Number of channels in the latent space. norm_num_groups (int, optional, defaults to 32) — +The number of groups for normalization. sample_size (int, optional, defaults to 32) — +Sample input size. scaling_factor (float, optional, defaults to 0.18215) — +The component-wise standard deviation of the trained latent space computed using the first batch of the +training set. This is used to scale the latent space to have unit variance when training the diffusion +model. The latents are scaled with the formula z = z * scaling_factor before being passed to the +diffusion model. When decoding, the latents are scaled back to the original scale with the formula: z = 1 / scaling_factor * z. For more details, refer to sections 4.3.2 and D.1 of the High-Resolution Image +Synthesis with Latent Diffusion Models paper. 
dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Flax implementation of a VAE model with KL loss for decoding latent representations. This model inherits from FlaxModelMixin. Check the superclass documentation for it’s generic methods +implemented for all models (such as downloading or saving). This model is a Flax Linen flax.linen.Module +subclass. Use it as a regular Flax Linen module and refer to the Flax documentation for all matter related to its +general usage and behavior. Inherent JAX features such as the following are supported: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization FlaxAutoencoderKLOutput class diffusers.models.vae_flax.FlaxAutoencoderKLOutput < source > ( latent_dist: FlaxDiagonalGaussianDistribution ) Parameters latent_dist (FlaxDiagonalGaussianDistribution) — +Encoded outputs of Encoder represented as the mean and logvar of FlaxDiagonalGaussianDistribution. +FlaxDiagonalGaussianDistribution allows for sampling latents from the distribution. Output of AutoencoderKL encoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. FlaxDecoderOutput class diffusers.models.vae_flax.FlaxDecoderOutput < source > ( sample: Array ) Parameters sample (jnp.ndarray of shape (batch_size, num_channels, height, width)) — +The decoded output sample from the last layer of the model. dtype (jnp.dtype, optional, defaults to jnp.float32) — +The dtype of the parameters. Output of decoding method. replace < source > ( **updates ) “Returns a new object replacing the specified fields with new values. diff --git a/scrapped_outputs/f6bf4e2942f4074cd92933063e7c5553.txt b/scrapped_outputs/f6bf4e2942f4074cd92933063e7c5553.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c39533f3811507775688b8fc90c71c93f8c744f --- /dev/null +++ b/scrapped_outputs/f6bf4e2942f4074cd92933063e7c5553.txt @@ -0,0 +1,324 @@ +InstructPix2Pix InstructPix2Pix: Learning to Follow Image Editing Instructions is by Tim Brooks, Aleksander Holynski and Alexei A. Efros. The abstract from the paper is: We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models — a language model (GPT-3) and a text-to-image model (Stable Diffusion) — to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. You can find additional information about InstructPix2Pix on the project page, original codebase, and try it out in a demo. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
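Before the class reference below, here is a minimal usage sketch. It is only an illustration: it assumes the timbrooks/instruct-pix2pix checkpoint and the mountain image used in the example further down, a CUDA device, and an illustrative choice of 20 inference steps; guidance_scale pulls the edit toward the text instruction while image_guidance_scale pulls it toward the input image.
Copied
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Load the pipeline in half precision and move it to the GPU.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Input image to edit (same image as in the example further below).
image = load_image(
    "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
).resize((512, 512))

# guidance_scale follows the instruction more strongly; image_guidance_scale (>= 1)
# keeps the result closer to the original image. 20 steps is an illustrative choice.
edited = pipe(
    prompt="make the mountains snowy",
    image=image,
    num_inference_steps=20,
    guidance_scale=7.5,
    image_guidance_scale=1.5,
).images[0]
edited.save("snowy_mountains.png")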
StableDiffusionInstructPix2PixPipeline class diffusers.StableDiffusionInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for pixel-level image editing by following text instructions (based on Stable Diffusion). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None image: Union = None num_inference_steps: int = 100 guidance_scale: float = 7.5 image_guidance_scale: float = 1.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. image (torch.FloatTensor np.ndarray, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be repainted according to prompt. Can also accept +image latents as image, but if passing latents directly it is not encoded again. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. image_guidance_scale (float, optional, defaults to 1.5) — +Push the generated image towards the inital image. Image guidance scale is enabled by setting +image_guidance_scale > 1. 
Higher image guidance scale encourages generated images that are closely +linked to the source image, usually at the expense of lower image quality. This pipeline requires a +value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import PIL +>>> import requests +>>> import torch +>>> from io import BytesIO + +>>> from diffusers import StableDiffusionInstructPix2PixPipeline + + +>>> def download_image(url): +... response = requests.get(url) +... 
return PIL.Image.open(BytesIO(response.content)).convert("RGB") + + +>>> img_url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" + +>>> image = download_image(img_url).resize((512, 512)) + +>>> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained( +... "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "make the mountains snowy" +>>> image = pipe(prompt=prompt, image=image).images[0] load_textual_inversion < source > ( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — +Can be either one of the following or a list of them: + +A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a +pretrained model hosted on the Hub. +A path to a directory (for example ./my_text_inversion_directory/) containing the textual +inversion weights. +A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights. +A torch state +dict. + token (str or List[str], optional) — +Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a +list, then token must also be a list of equal length. text_encoder (CLIPTextModel, optional) — +Frozen text-encoder (clip-vit-large-patch14). +If not specified, function will take self.tokenizer. tokenizer (CLIPTokenizer, optional) — +A CLIPTokenizer to tokenize text. If not specified, function will take self.tokenizer. weight_name (str, optional) — +Name of a custom weight file. This should be used when: + +The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight +name such as text_inv.bin. +The saved textual inversion file is in the Automatic1111 format. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. 
We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and +Automatic1111 formats are supported). Example: To load a Textual Inversion embedding vector in 🤗 Diffusers format: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("sd-concepts-library/cat-toy") + +prompt = "A backpack" + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("cat-backpack.png") To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first +(for example from civitAI) and then load the vector locally: Copied from diffusers import StableDiffusionPipeline +import torch + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2") + +prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details." + +image = pipe(prompt, num_inference_steps=50).images[0] +image.save("character.png") load_lora_weights < source > ( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +See lora_state_dict(). kwargs (dict, optional) — +See lora_state_dict(). adapter_name (str, optional) — +Adapter name to be used for referencing the loaded adapter model. If not specified, it will use +default_{i} where i is the total number of adapters being loaded. Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and +self.text_encoder. All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded. See load_lora_into_unet() for more details on how the state dict is loaded into +self.unet. See load_lora_into_text_encoder() for more details on how the state dict is loaded +into self.text_encoder. save_lora_weights < source > ( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None transformer_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True ) Parameters save_directory (str or os.PathLike) — +Directory to save LoRA parameters to. Will be created if it doesn’t exist. unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the unet. text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — +State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text +encoder LoRA state dict because it comes from 🤗 Transformers. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. 
Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. Save the LoRA parameters corresponding to the UNet and text encoder. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionXLInstructPix2PixPipeline class diffusers.StableDiffusionXLInstructPix2PixPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True add_watermarker: Optional = None ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion XL uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. text_encoder_2 ( CLIPTextModelWithProjection) — +Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of +CLIP, +specifically the +laion/CLIP-ViT-bigG-14-laion2B-39B-b160k +variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. tokenizer_2 (CLIPTokenizer) — +Second Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. requires_aesthetics_score (bool, optional, defaults to "False") — +Whether the unet requires a aesthetic_score condition to be passed during inference. Also see the config +of stabilityai/stable-diffusion-xl-refiner-1-0. force_zeros_for_empty_prompt (bool, optional, defaults to "True") — +Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of +stabilityai/stable-diffusion-xl-base-1-0. add_watermarker (bool, optional) — +Whether to use the invisible_watermark library to +watermark output images. If not defined, it will default to True if the package is installed, otherwise no +watermarker will be used. Pipeline for pixel-level image editing by following text instructions. Based on Stable Diffusion XL. This model inherits from DiffusionPipeline. 
Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 100 denoising_end: Optional = None guidance_scale: float = 5.0 image_guidance_scale: float = 1.5 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Tuple = None crops_coords_top_left: Tuple = (0, 0) target_size: Tuple = None ) → ~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor or PIL.Image.Image or np.ndarray or List[torch.FloatTensor] or List[PIL.Image.Image] or List[np.ndarray]) — +The image(s) to modify with the pipeline. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. image_guidance_scale (float, optional, defaults to 1.5) — +Image guidance scale is to push the generated image towards the inital image image. Image guidance +scale is enabled by setting image_guidance_scale > 1. 
Higher image guidance scale encourages to +generate images that are closely linked to the source image image, usually at the expense of lower +image quality. This pipeline requires a value of at least 1. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionXLPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed guidance_scale is defined as φ in equation 16. 
of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. aesthetic_score (float, optional, defaults to 6.0) — +Used to simulate an aesthetic score of the generated image by influencing the positive text condition. +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. negative_aesthetic_score (float, optional, defaults to 2.5) — +Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. Can be used to +simulate an aesthetic score of the generated image by influencing the negative text condition. Returns +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput or tuple + +~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionXLInstructPix2PixPipeline +>>> from diffusers.utils import load_image + +>>> resolution = 768 +>>> image = load_image( +... "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" +... ).resize((resolution, resolution)) +>>> edit_instruction = "Turn sky into a cloudy one" + +>>> pipe = StableDiffusionXLInstructPix2PixPipeline.from_pretrained( +... "diffusers/sdxl-instructpix2pix-768", torch_dtype=torch.float16 +... ).to("cuda") + +>>> edited_image = pipe( +... prompt=edit_instruction, +... image=image, +... height=resolution, +... width=resolution, +... guidance_scale=3.0, +... image_guidance_scale=1.5, +... num_inference_steps=30, +... ).images[0] +>>> edited_image disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. 
s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. 
lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. Encodes the prompt into text encoder hidden states. diff --git a/scrapped_outputs/f6c84b96e84c11ffb30da7b086836b4c.txt b/scrapped_outputs/f6c84b96e84c11ffb30da7b086836b4c.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d4f2a190ae5e539921a29c22ee5aca25a320dc2 --- /dev/null +++ b/scrapped_outputs/f6c84b96e84c11ffb30da7b086836b4c.txt @@ -0,0 +1,10 @@ +Using Diffusers with other modalities + +Diffusers is in the process of expanding to modalities other than images. +Example type +Colab +Pipeline +Molecule conformation generation + +❌ +More coming soon! diff --git a/scrapped_outputs/f70f1e03304419e54491ab86e21c21f2.txt b/scrapped_outputs/f70f1e03304419e54491ab86e21c21f2.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/f70f1e03304419e54491ab86e21c21f2.txt @@ -0,0 +1,17 @@ +Token merging Token merging (ToMe) merges redundant tokens/patches progressively in the forward pass of a Transformer-based network which can speed-up the inference latency of StableDiffusionPipeline. Install ToMe from pip: Copied pip install tomesd You can use ToMe from the tomesd library with the apply_patch function: Copied from diffusers import StableDiffusionPipeline + import torch + import tomesd + + pipeline = StableDiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True, + ).to("cuda") ++ tomesd.apply_patch(pipeline, ratio=0.5) + + image = pipeline("a photo of an astronaut riding a horse on mars").images[0] The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated tokens. The most important argument is ratio which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed-up inference even further, but at the cost of some degraded image quality. To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. 
GPU  | Resolution | Batch size | Vanilla | ToMe           | ToMe + xFormers
A100 | 512        | 10         | 6.88    | 5.26 (+23.55%) | 4.69 (+31.83%)
A100 | 768        | 10         | OOM     | 14.71          | 11
A100 | 768        | 8          | OOM     | 11.56          | 8.84
A100 | 768        | 4          | OOM     | 5.98           | 4.66
A100 | 768        | 2          | 4.99    | 3.24 (+35.07%) | 2.1 (+37.88%)
A100 | 768        | 1          | 3.29    | 2.24 (+31.91%) | 2.03 (+38.3%)
A100 | 1024       | 10         | OOM     | OOM            | OOM
A100 | 1024       | 8          | OOM     | OOM            | OOM
A100 | 1024       | 4          | OOM     | 12.51          | 9.09
A100 | 1024       | 2          | OOM     | 6.52           | 4.96
A100 | 1024       | 1          | 6.4     | 3.61 (+43.59%) | 2.81 (+56.09%)
V100 | 512        | 10         | OOM     | 10.03          | 9.29
V100 | 512        | 8          | OOM     | 8.05           | 7.47
V100 | 512        | 4          | 5.7     | 4.3 (+24.56%)  | 3.98 (+30.18%)
V100 | 512        | 2          | 3.14    | 2.43 (+22.61%) | 2.27 (+27.71%)
V100 | 512        | 1          | 1.88    | 1.57 (+16.49%) | 1.57 (+16.49%)
V100 | 768        | 10         | OOM     | OOM            | 23.67
V100 | 768        | 8          | OOM     | OOM            | 18.81
V100 | 768        | 4          | OOM     | 11.81          | 9.7
V100 | 768        | 2          | OOM     | 6.27           | 5.2
V100 | 768        | 1          | 5.43    | 3.38 (+37.75%) | 2.82 (+48.07%)
V100 | 1024       | 10         | OOM     | OOM            | OOM
V100 | 1024       | 8          | OOM     | OOM            | OOM
V100 | 1024       | 4          | OOM     | OOM            | 19.35
V100 | 1024       | 2          | OOM     | 13             | 10.78
V100 | 1024       | 1          | OOM     | 6.66           | 5.54
As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/f728ef0eb6957360e6af42a1fe7ee3d6.txt b/scrapped_outputs/f728ef0eb6957360e6af42a1fe7ee3d6.txt new file mode 100644 index 0000000000000000000000000000000000000000..c796491cbfe9ea7c96684c36934fc2d682903305 --- /dev/null +++ b/scrapped_outputs/f728ef0eb6957360e6af42a1fe7ee3d6.txt @@ -0,0 +1,191 @@ +Stable Diffusion XL Stable Diffusion XL (SDXL) is a powerful text-to-image generation model that iterates on the previous Stable Diffusion models in three key ways: the UNet is 3x larger and SDXL combines a second text encoder (OpenCLIP ViT-bigG/14) with the original text encoder to significantly increase the number of parameters introduces size and crop-conditioning to preserve training data from being discarded and gain more control over how a generated image should be cropped introduces a two-stage model process; the base model (can also be run as a standalone model) generates an image as an input to the refiner model which adds additional high-quality details This guide will show you how to use SDXL for text-to-image, image-to-image, and inpainting. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate invisible-watermark>=0.2.0 We recommend installing the invisible-watermark library to help identify images that are generated. If the invisible-watermark library is installed, it is used by default.
To disable the watermarker: Copied pipeline = StableDiffusionXLPipeline.from_pretrained(..., add_watermarker=False) Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLImg2ImgPipeline.from_single_file( + "https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16, use_safetensors=True, variant="fp16" +).to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL generates a 1024x1024 image for the best results. You can try setting the height and width parameters to 768x768 or 512x512, but anything below 512x512 is not likely to work. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline_text2image(prompt=prompt).images[0] +image Image-to-image For image-to-image, SDXL works especially well with image sizes between 768x768 and 1024x1024. Pass an initial image, and a text prompt to condition the image with: Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +init_image = load_image(url) +prompt = "a dog catching a frisbee in the jungle" +image = pipeline(prompt, image=init_image, strength=0.8, guidance_scale=10.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Inpainting For inpainting, you’ll need the original image and a mask of what you want to replace in the original image. Create a prompt to describe what you want to replace the masked area with. 
Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForInpainting.from_pipe(pipeline_text2image).to("cuda") + +img_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png" +mask_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-inpaint-mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A deep sea diver floating" +image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image, strength=0.85, guidance_scale=12.5).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Refine image quality SDXL includes a refiner model specialized in denoising low-noise stage images to generate higher-quality images from the base model. There are two ways to use the refiner: use the base and refiner models together to produce a refined image use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained) Base + refiner model When you use the base and refiner model together to generate an image, this is known as an ensemble of expert denoisers. The ensemble of expert denoisers approach requires fewer overall denoising steps versus passing the base model’s output to the refiner model, so it should be significantly faster to run. However, you won’t be able to inspect the base model’s output because it still contains a large amount of noise. As an ensemble of expert denoisers, the base model serves as the expert during the high-noise diffusion stage and the refiner model serves as the expert during the low-noise diffusion stage. Load the base and refiner model: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") To use this approach, you need to define the number of timesteps for each model to run through their respective stages. For the base model, this is controlled by the denoising_end parameter and for the refiner model, it is controlled by the denoising_start parameter. The denoising_end and denoising_start parameters should be a float between 0 and 1. These parameters are represented as a proportion of discrete timesteps as defined by the scheduler. If you’re also using the strength parameter, it’ll be ignored because the number of denoising steps is determined by the discrete timesteps the model is trained on and the declared fractional cutoff. Let’s set denoising_end=0.8 so the base model performs the first 80% of denoising the high-noise timesteps and set denoising_start=0.8 so the refiner model performs the last 20% of denoising the low-noise timesteps. The base model output should be in latent space instead of a PIL image. 
Copied prompt = "A majestic lion jumping from a big stone at night" + +image = base( + prompt=prompt, + num_inference_steps=40, + denoising_end=0.8, + output_type="latent", +).images +image = refiner( + prompt=prompt, + num_inference_steps=40, + denoising_start=0.8, + image=image, +).images[0] +image default base model ensemble of expert denoisers The refiner model can also be used for inpainting in the StableDiffusionXLInpaintPipeline: Copied from diffusers import StableDiffusionXLInpaintPipeline +from diffusers.utils import load_image, make_image_grid +import torch + +base = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = StableDiffusionXLInpaintPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url) +mask_image = load_image(mask_url) + +prompt = "A majestic tiger sitting on a bench" +num_inference_steps = 75 +high_noise_frac = 0.7 + +image = base( + prompt=prompt, + image=init_image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_end=high_noise_frac, + output_type="latent", +).images +image = refiner( + prompt=prompt, + image=image, + mask_image=mask_image, + num_inference_steps=num_inference_steps, + denoising_start=high_noise_frac, +).images[0] +make_image_grid([init_image, mask_image, image.resize((512, 512))], rows=1, cols=3) This ensemble of expert denoisers method works well for all available schedulers! Base to refiner model SDXL gets a boost in image quality by using the refiner model to add additional high-quality details to the fully-denoised image from the base model, in an image-to-image setting. Load the base and refiner models: Copied from diffusers import DiffusionPipeline +import torch + +base = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +refiner = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", + text_encoder_2=base.text_encoder_2, + vae=base.vae, + torch_dtype=torch.float16, + use_safetensors=True, + variant="fp16", +).to("cuda") Generate an image from the base model, and set the model output to latent space: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +image = base(prompt=prompt, output_type="latent").images[0] Pass the generated image to the refiner model: Copied image = refiner(prompt=prompt, image=image[None, :]).images[0] base model base model + refiner model For inpainting, load the base and the refiner model in the StableDiffusionXLInpaintPipeline, remove the denoising_end and denoising_start parameters, and choose a smaller number of inference steps for the refiner. Micro-conditioning SDXL training involves several additional conditioning techniques, which are referred to as micro-conditioning. These include original image size, target image size, and cropping parameters. 
The micro-conditionings can be used at inference time to create high-quality, centered images. You can use both micro-conditioning and negative micro-conditioning parameters thanks to classifier-free guidance. They are available in the StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, and StableDiffusionXLControlNetPipeline. Size conditioning There are two types of size conditioning: original_size conditioning comes from upscaled images in the training batch (because it would be wasteful to discard the smaller images which make up almost 40% of the total training data). This way, SDXL learns that upscaling artifacts are not supposed to be present in high-resolution images. During inference, you can use original_size to indicate the original image resolution. Using the default value of (1024, 1024) produces higher-quality images that resemble the 1024x1024 images in the dataset. If you choose to use a lower resolution, such as (256, 256), the model still generates 1024x1024 images, but they’ll look like the low resolution images (simpler patterns, blurring) in the dataset. target_size conditioning comes from finetuning SDXL to support different image aspect ratios. During inference, if you use the default value of (1024, 1024), you’ll get an image that resembles the composition of square images in the dataset. We recommend using the same value for target_size and original_size, but feel free to experiment with other options! 🤗 Diffusers also lets you specify negative conditions about an image’s size to steer generation away from certain image resolutions: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_target_size=(1024, 1024), +).images[0] Images negatively conditioned on image resolutions of (128, 128), (256, 256), and (512, 512). Crop conditioning Images generated by previous Stable Diffusion models may sometimes appear to be cropped. This is because images are actually cropped during training so that all the images in a batch have the same size. By conditioning on crop coordinates, SDXL learns that no cropping - coordinates (0, 0) - usually correlates with centered subjects and complete faces (this is the default value in 🤗 Diffusers). You can experiment with different coordinates if you want to generate off-centered compositions! 
Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipeline(prompt=prompt, crops_coords_top_left=(256, 0)).images[0] +image You can also specify negative cropping coordinates to steer generation away from certain cropping parameters: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image = pipe( + prompt=prompt, + negative_original_size=(512, 512), + negative_crops_coords_top_left=(0, 0), + negative_target_size=(1024, 1024), +).images[0] +image Use a different prompt for each text-encoder SDXL uses two text-encoders, so it is possible to pass a different prompt to each text-encoder, which can improve quality. Pass your original prompt to prompt and the second prompt to prompt_2 (use negative_prompt and negative_prompt_2 if you’re using negative prompts): Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +).to("cuda") + +# prompt is passed to OAI CLIP-ViT/L-14 +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +# prompt_2 is passed to OpenCLIP-ViT/bigG-14 +prompt_2 = "Van Gogh painting" +image = pipeline(prompt=prompt, prompt_2=prompt_2).images[0] +image The dual text-encoders also support textual inversion embeddings that need to be loaded separately as explained in the SDXL textual inversion section. Optimizations SDXL is a large model, and you may need to optimize memory to get it to run on your hardware. Here are some tips to save memory and speed up inference. Offload the model to the CPU with enable_model_cpu_offload() for out-of-memory errors: Copied - base.to("cuda") +- refiner.to("cuda") ++ base.enable_model_cpu_offload() ++ refiner.enable_model_cpu_offload() Use torch.compile for ~20% speed-up (you need torch>=2.0): Copied + base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True) ++ refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True) Enable xFormers to run SDXL if torch<2.0: Copied + base.enable_xformers_memory_efficient_attention() ++ refiner.enable_xformers_memory_efficient_attention() Other resources If you’re interested in experimenting with a minimal version of the UNet2DConditionModel used in SDXL, take a look at the minSDXL implementation which is written in PyTorch and directly compatible with 🤗 Diffusers. diff --git a/scrapped_outputs/f732f920c6511f692f537a7c3717b6d4.txt b/scrapped_outputs/f732f920c6511f692f537a7c3717b6d4.txt new file mode 100644 index 0000000000000000000000000000000000000000..3daa3c7a8e691d007251b50d136b24a76d843cf8 --- /dev/null +++ b/scrapped_outputs/f732f920c6511f692f537a7c3717b6d4.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. 
This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate omegaconf Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable classifier-free guidance, as the model was trained without it. A single inference step is enough to generate high-quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is greater than or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2 = 1 step in +our example below. Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or later. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation.
You only need to do this once before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcast to float32. diff --git a/scrapped_outputs/f736ef81ef63a36132de0683c3a9f2a9.txt b/scrapped_outputs/f736ef81ef63a36132de0683c3a9f2a9.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f7c6b2d75a07593dbffafd5ee00fe75f.txt b/scrapped_outputs/f7c6b2d75a07593dbffafd5ee00fe75f.txt new file mode 100644 index 0000000000000000000000000000000000000000..49e19fb4c11ed7fa69c26f38e304a1a47862bdca --- /dev/null +++ b/scrapped_outputs/f7c6b2d75a07593dbffafd5ee00fe75f.txt @@ -0,0 +1,466 @@ +Text-to-Image Generation with Adapter Conditioning Overview T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie. Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details. The abstract of the paper is the following: The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling (e.g., color and structure) is needed. In this paper, we aim to “dig out” the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and lightweight T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, achieving rich control and editing effects in the color and structure of the generation results. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications. This model was contributed by the community contributor HimariO ❤️ . Available Pipelines: Pipeline Tasks Demo StableDiffusionAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning - StableDiffusionXLAdapterPipeline Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL - Usage example with the base model of StableDiffusion-1.4/1.5 In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5. +All adapters use the same pipeline. Images are first converted into the appropriate control image format. The control image and prompt are passed to the StableDiffusionAdapterPipeline. Let’s have a look at a simple example using the Color Adapter. Copied from diffusers.utils import load_image, make_image_grid + +image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png") Then we can create our color palette by simply resizing it to 8 by 8 pixels and then scaling it back to the original size.
Copied from PIL import Image + +color_palette = image.resize((8, 8)) +color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) Let’s take a look at the processed image. Next, create the adapter pipeline Copied import torch +from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + adapter=adapter, + torch_dtype=torch.float16, +) +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator("cuda").manual_seed(7) + +out_image = pipe( + "At night, glowing cubes in front of the beach", + image=color_palette, + generator=generator, +).images[0] +make_image_grid([image, color_palette, out_image], rows=1, cols=3) Usage example with the base model of StableDiffusion-XL In the following we give a simple example of how to use a T2I-Adapter checkpoint with Diffusers for inference based on StableDiffusion-XL. +All adapters use the same pipeline. Images are first downloaded into the appropriate control image format. The control image and prompt are passed to the StableDiffusionXLAdapterPipeline. Let’s have a look at a simple example using the Sketch Adapter. Copied from diffusers.utils import load_image, make_image_grid + +sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") Then, create the adapter pipeline Copied import torch +from diffusers import ( + T2IAdapter, + StableDiffusionXLAdapterPipeline, + DDPMScheduler +) + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +adapter = T2IAdapter.from_pretrained("Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl") +scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +) + +pipe.to("cuda") Finally, pass the prompt and control image to the pipeline Copied # fix the random seed, so you will get the same result as the example +generator = torch.Generator().manual_seed(42) + +sketch_image_out = pipe( + prompt="a photo of a dog in real world, high quality", + negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", + image=sketch_image, + generator=generator, + guidance_scale=7.5 +).images[0] +make_image_grid([sketch_image, sketch_image_out], rows=1, cols=2) Available checkpoints Non-diffusers checkpoints can be found under TencentARC/T2I-Adapter. T2I-Adapter with Stable Diffusion 1.4 Model Name Control Image Overview Control Image Example Generated Image Example TencentARC/t2iadapter_color_sd14v1 Trained with spatial color palette An image with 8x8 color palette. TencentARC/t2iadapter_canny_sd14v1 Trained with canny edge detection A monochrome image with white edges on a black background. TencentARC/t2iadapter_sketch_sd14v1 Trained with PidiNet edge detection A hand-drawn monochrome image with white outlines on a black background. TencentARC/t2iadapter_depth_sd14v1 Trained with Midas depth estimation A grayscale image with black representing deep areas and white representing shallow areas. TencentARC/t2iadapter_openpose_sd14v1 Trained with OpenPose bone image A OpenPose bone image. 
TencentARC/t2iadapter_keypose_sd14v1 Trained with mmpose skeleton image A mmpose skeleton image. TencentARC/t2iadapter_seg_sd14v1Trained with semantic segmentation An custom segmentation protocol image. TencentARC/t2iadapter_canny_sd15v2 TencentARC/t2iadapter_depth_sd15v2 TencentARC/t2iadapter_sketch_sd15v2 TencentARC/t2iadapter_zoedepth_sd15v1 Adapter/t2iadapter, subfolder=‘sketch_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘canny_sdxl_1.0’ Adapter/t2iadapter, subfolder=‘openpose_sdxl_1.0’ Combining multiple adapters MultiAdapter can be used for applying multiple conditionings at once. Here we use the keypose adapter for the character posture and the depth adapter for creating the scene. Copied from diffusers.utils import load_image, make_image_grid + +cond_keypose = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png" +) +cond_depth = load_image( + "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png" +) +cond = [cond_keypose, cond_depth] + +prompt = ["A man walking in an office room with a nice view"] The two control images look as such: MultiAdapter combines keypose and depth adapters. adapter_conditioning_scale balances the relative influence of the different adapters. Copied import torch +from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter + +adapters = MultiAdapter( + [ + T2IAdapter.from_pretrained("TencentARC/t2iadapter_keypose_sd14v1"), + T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd14v1"), + ] +) +adapters = adapters.to(torch.float16) + +pipe = StableDiffusionAdapterPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + torch_dtype=torch.float16, + adapter=adapters, +).to("cuda") + +image = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images[0] +make_image_grid([cond_keypose, cond_depth, image], rows=1, cols=3) T2I-Adapter vs ControlNet T2I-Adapter is similar to ControlNet. +T2I-Adapter uses a smaller auxiliary network which is only run once for the entire diffusion process. +However, T2I-Adapter performs slightly worse than ControlNet. StableDiffusionAdapterPipeline class diffusers.StableDiffusionAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. 
Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None adapter_conditioning_scale: Union = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds +instead. image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance for the UNet. If the +type is specified as torch.FloatTensor, it is passed to the Adapter as is. PIL.Image.Image can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2 of the Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput instead +of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.models.attention_processor. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from PIL import Image +>>> from diffusers.utils import load_image +>>> import torch +>>> from diffusers import StableDiffusionAdapterPipeline, T2IAdapter + +>>> image = load_image( +... "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png" +... 
) + +>>> color_palette = image.resize((8, 8)) +>>> color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST) + +>>> adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16) +>>> pipe = StableDiffusionAdapterPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", +... adapter=adapter, +... torch_dtype=torch.float16, +... ) + +>>> pipe.to("cuda") + +>>> out_image = pipe( +... "At night, glowing cubes in front of the beach", +... image=color_palette, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... ) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. 
Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
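If you want to reuse the same text conditioning across several generations, you can call encode_prompt() once and pass the resulting tensors back to the pipeline through prompt_embeds and negative_prompt_embeds. The sketch below is illustrative rather than canonical: it assumes encode_prompt() returns a (prompt_embeds, negative_prompt_embeds) pair, as in the other Stable Diffusion pipelines, and it reuses the color adapter checkpoint from the example earlier on this page.

import torch
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# prepare the 8x8 color palette control image, as earlier on this page
image = load_image("https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_ref.png")
color_palette = image.resize((8, 8)).resize((512, 512), resample=Image.Resampling.NEAREST)

# encode the prompt once; with classifier-free guidance enabled the negative embeddings are returned too
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "At night, glowing cubes in front of the beach",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="low quality, blurry",
)

# reuse the precomputed embeddings instead of passing the raw prompt strings
out_image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    image=color_palette,
).images[0]

Precomputing the embeddings this way is mostly useful when the same prompt is paired with many different control images.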
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 StableDiffusionXLAdapterPipeline class diffusers.StableDiffusionXLAdapterPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel adapter: Union scheduler: KarrasDiffusionSchedulers force_zeros_for_empty_prompt: bool = True feature_extractor: CLIPImageProcessor = None image_encoder: CLIPVisionModelWithProjection = None ) Parameters adapter (T2IAdapter or MultiAdapter or List[T2IAdapter]) — +Provides additional conditioning to the unet during the denoising process. If you set multiple Adapter as a +list, the outputs from each Adapter are added together to create one combined additional conditioning. adapter_weights (List[float], optional, defaults to None) — +List of floats representing the weight which will be multiply to each adapter’s output before adding them +together. vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. Pipeline for text-to-image generation using Stable Diffusion augmented with T2I-Adapter +https://arxiv.org/abs/2302.08453 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings from_single_file() for loading .ckpt files load_lora_weights() for loading LoRA weights save_lora_weights() for saving LoRA weights load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None prompt_2: Union = None image: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 timesteps: List = None denoising_end: Optional = None guidance_scale: float = 5.0 negative_prompt: Union = None negative_prompt_2: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None guidance_rescale: float = 0.0 original_size: Optional = None crops_coords_top_left: Tuple = (0, 0) target_size: Optional = None negative_original_size: Optional = None negative_crops_coords_top_left: Tuple = (0, 0) negative_target_size: Optional = None adapter_conditioning_scale: Union = 1.0 adapter_conditioning_factor: float = 1.0 clip_skip: Optional = None ) → ~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders image (torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor] or List[PIL.Image.Image] or List[List[PIL.Image.Image]]) — +The Adapter input condition. Adapter uses this input condition to generate guidance to Unet. If the +type is specified as Torch.FloatTensor, it is passed to Adapter as is. PIL.Image.Image` can also be +accepted as an image. The control image is automatically resized to fit the output image. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. Anything below 512 pixels won’t work well for +stabilityai/stable-diffusion-xl-base-1.0 +and checkpoints that are not specifically fine-tuned on low resolutions. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process with schedulers which support a timesteps argument +in their set_timesteps method. If not defined, the default behavior when num_inference_steps is +passed will be used. Must be in descending order. denoising_end (float, optional) — +When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be +completed before it is intentionally prematurely terminated. 
As a result, the returned sample will +still retain a substantial amount of noise as determined by the discrete timesteps selected by the +scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a +“Mixture of Denoisers” multi-pipeline setup, as elaborated in Refining the Image +Output guidance_scale (float, optional, defaults to 5.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. +ip_adapter_image — (PipelineImageInput, optional): Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion_xl.StableDiffusionAdapterPipelineOutput +instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. 
The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. guidance_rescale (float, optional, defaults to 0.0) — +Guidance rescale factor proposed by Common Diffusion Noise Schedules and Sample Steps are +Flawed. guidance_scale is defined as φ in equation 16 of +Common Diffusion Noise Schedules and Sample Steps are Flawed. +Guidance rescale factor should fix overexposure when using zero terminal SNR. original_size (Tuple[int], optional, defaults to (1024, 1024)) — +If original_size is not the same as target_size the image will appear to be down- or upsampled. +original_size defaults to (height, width) if not specified. Part of SDXL’s micro-conditioning as +explained in section 2.2 of +https://huggingface.co/papers/2307.01952. crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +crops_coords_top_left can be used to generate an image that appears to be “cropped” from the position +crops_coords_top_left downwards. Favorable, well-centered images are usually achieved by setting +crops_coords_top_left to (0, 0). Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. target_size (Tuple[int], optional, defaults to (1024, 1024)) — +For most cases, target_size should be set to the desired height and width of the generated image. If +not specified it will default to (height, width). Part of SDXL’s micro-conditioning as explained in +section 2.2 of https://huggingface.co/papers/2307.01952. negative_original_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a specific image resolution. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_crops_coords_top_left (Tuple[int], optional, defaults to (0, 0)) — +To negatively condition the generation process based on specific crop coordinates. Part of SDXL’s +micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. negative_target_size (Tuple[int], optional, defaults to (1024, 1024)) — +To negatively condition the generation process based on a target image resolution. It should be the same +as the target_size for most cases. Part of SDXL’s micro-conditioning as explained in section 2.2 of +https://huggingface.co/papers/2307.01952. For more +information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208. adapter_conditioning_scale (float or List[float], optional, defaults to 1.0) — +The outputs of the adapter are multiplied by adapter_conditioning_scale before they are added to the +residual in the original unet. If multiple adapters are specified in init, you can set the +corresponding scale as a list.
adapter_conditioning_factor (float, optional, defaults to 1.0) — +The fraction of timesteps for which adapter should be applied. If adapter_conditioning_factor is +0.0, adapter is not applied at all. If adapter_conditioning_factor is 1.0, adapter is applied for +all timesteps. If adapter_conditioning_factor is 0.5, adapter is applied for half of the timesteps. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput or tuple + +~pipelines.stable_diffusion.StableDiffusionAdapterPipelineOutput if return_dict is True, otherwise a +tuple. When returning a tuple, the first element is a list with the generated images. + Function invoked when calling the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import T2IAdapter, StableDiffusionXLAdapterPipeline, DDPMScheduler +>>> from diffusers.utils import load_image + +>>> sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L") + +>>> model_id = "stabilityai/stable-diffusion-xl-base-1.0" + +>>> adapter = T2IAdapter.from_pretrained( +... "Adapter/t2iadapter", +... subfolder="sketch_sdxl_1.0", +... torch_dtype=torch.float16, +... adapter_type="full_adapter_xl", +... ) +>>> scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler") + +>>> pipe = StableDiffusionXLAdapterPipeline.from_pretrained( +... model_id, adapter=adapter, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler +... ).to("cuda") + +>>> generator = torch.manual_seed(42) +>>> sketch_image_out = pipe( +... prompt="a photo of a dog in real world, high quality", +... negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality", +... image=sketch_image, +... generator=generator, +... guidance_scale=7.5, +... ).images[0] enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. disable_vae_tiling < source > ( ) Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to +computing decoding in one step. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. enable_vae_tiling < source > ( ) Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to +compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow +processing larger images. 
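As a rough sketch of how these memory helpers fit together, the snippet below enables sliced and tiled VAE decoding on the sketch adapter pipeline used earlier on this page. The checkpoint names are taken from the examples above, and each toggle can be reverted with its disable_* counterpart.

import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter

adapter = T2IAdapter.from_pretrained(
    "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# decode the latents slice by slice and tile by tile to lower peak VAE memory
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run the pipeline as usual ...

# restore single-pass decoding once memory is no longer a concern
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()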
encode_prompt < source > ( prompt: str prompt_2: Optional = None device: Optional = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: Optional = None negative_prompt_2: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None pooled_prompt_embeds: Optional = None negative_pooled_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded prompt_2 (str or List[str], optional) — +The prompt or prompts to be sent to the tokenizer_2 and text_encoder_2. If not defined, prompt is +used in both text-encoders +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). negative_prompt_2 (str or List[str], optional) — +The prompt or prompts not to guide the image generation to be sent to tokenizer_2 and +text_encoder_2. If not defined, negative_prompt is used in both text-encoders prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. +If not provided, pooled text embeddings will be generated from prompt input argument. negative_pooled_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, pooled negative_prompt_embeds will be generated from negative_prompt +input argument. lora_scale (float, optional) — +A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. 
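Because SDXL conditions on both per-token and pooled text embeddings, reusing precomputed prompts means passing four tensors back to the pipeline. The sketch below is illustrative rather than canonical: it assumes encode_prompt() returns (prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds) in that order, as in the other SDXL pipelines, and it reuses the sketch adapter setup shown earlier on this page.

import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

adapter = T2IAdapter.from_pretrained(
    "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L")

# encode once and reuse the four conditioning tensors across calls
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="a photo of a dog in real world, high quality",
    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
    device="cuda",
    do_classifier_free_guidance=True,
)

image = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=sketch_image,
).images[0]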
get_guidance_scale_embedding < source > ( w embedding_dim = 512 dtype = torch.float32 ) → torch.FloatTensor Parameters timesteps (torch.Tensor) — +generate embedding vectors at these timesteps embedding_dim (int, optional, defaults to 512) — +dimension of the embeddings to generate +dtype — +data type of the generated embeddings Returns +torch.FloatTensor + +Embedding vectors with shape (len(timesteps), embedding_dim) + See https://github.com/google-research/vdm/blob/dc27b98a554f65cdc654b800da5aa1846545d41b/model_vdm.py#L298 diff --git a/scrapped_outputs/f81391e6a71eb2b53bed95eac11be252.txt b/scrapped_outputs/f81391e6a71eb2b53bed95eac11be252.txt new file mode 100644 index 0000000000000000000000000000000000000000..be2cb47ac7929d07604329901692862da670fc66 --- /dev/null +++ b/scrapped_outputs/f81391e6a71eb2b53bed95eac11be252.txt @@ -0,0 +1,70 @@ +MusicLDM MusicLDM was proposed in MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies by Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov. +MusicLDM takes a text prompt as input and predicts the corresponding music sample. Inspired by Stable Diffusion and AudioLDM, +MusicLDM is a text-to-music latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. MusicLDM is trained on a corpus of 466 hours of music data. Beat-synchronous data augmentation strategies are applied to the music samples, both in the time domain and in the latent space. Using beat-synchronous data augmentation strategies encourages the model to interpolate between the training samples, but stay within the domain of the training data. The result is generated music that is more diverse while staying faithful to the corresponding style. The abstract of the paper is the following: Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. This pipeline was contributed by sanchit-gandhi. 
Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific where possible (e.g. “melodic techno with a fast beat and synths” works better than “techno”). Using a negative prompt can significantly improve the quality of the generated audio. Try using a negative prompt of “low quality, average quality”. During inference: The quality of the generated audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. Multiple waveforms can be generated in one go: set num_waveforms_per_prompt to a value greater than 1 to enable. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The length of the generated audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. MusicLDMPipeline class diffusers.MusicLDMPipeline < source > ( vae: AutoencoderKL text_encoder: Union tokenizer: Union feature_extractor: Optional unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapModel) — +Frozen text-audio embedding model (ClapTextModel), specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. feature_extractor (ClapFeatureExtractor) — +Feature extractor to compute mel-spectrograms from audio waveforms. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using MusicLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 200 guidance_scale: float = 2.0 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 10.24) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 200) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 2.0) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. If num_waveforms_per_prompt > 1, the text encoding +model is a joint text-audio model (ClapModel), and the tokenizer is a +[~transformers.ClapProcessor], then automatic scoring will be performed between the generated outputs +and the input text. This scoring ranks the generated waveforms based on their cosine similarity to text +input in the joint text-audio embedding space. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated audio. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Set to "latent" to return the latent diffusion +model (LDM) output. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. 
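The Examples section below shows the basic call; as a complement, here is a minimal sketch of how the tips above (a negative prompt plus automatic CLAP scoring via num_waveforms_per_prompt) might be combined. The checkpoint id is the one used in the example below; the prompt, step count, and audio length are illustrative.

import scipy
import torch
from diffusers import MusicLDMPipeline

repo_id = "ucsd-reach/musicldm"  # same checkpoint as in the example below
pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "melodic techno with a fast beat and synths"
negative_prompt = "low quality, average quality"

# num_waveforms_per_prompt > 1 triggers automatic CLAP scoring,
# so audios[0] is the waveform ranked closest to the prompt
audios = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_length_in_s=10.0,
    num_waveforms_per_prompt=3,
).audios

scipy.io.wavfile.write("techno_best.wav", rate=16000, data=audios[0])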
Examples: Copied >>> from diffusers import MusicLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "ucsd-reach/musicldm" +>>> pipe = MusicLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. diff --git a/scrapped_outputs/f82200a109304c3b6a572885ac8dd554.txt b/scrapped_outputs/f82200a109304c3b6a572885ac8dd554.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b1735de34d975258705c997ab6b7091fbeddde0 --- /dev/null +++ b/scrapped_outputs/f82200a109304c3b6a572885ac8dd554.txt @@ -0,0 +1,2 @@ +Activation functions Customized activation functions for supporting various models in 🤗 Diffusers. GELU class diffusers.models.activations.GELU < source > ( dim_in: int dim_out: int approximate: str = 'none' bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. approximate (str, optional, defaults to "none") — If "tanh", use tanh approximation. bias (bool, defaults to True) — Whether to use a bias in the linear layer. GELU activation function with tanh approximation support with approximate="tanh". GEGLU class diffusers.models.activations.GEGLU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. A variant of the gated linear unit activation function. ApproximateGELU class diffusers.models.activations.ApproximateGELU < source > ( dim_in: int dim_out: int bias: bool = True ) Parameters dim_in (int) — The number of channels in the input. dim_out (int) — The number of channels in the output. bias (bool, defaults to True) — Whether to use a bias in the linear layer. The approximate form of the Gaussian Error Linear Unit (GELU). For more details, see section 2 of this +paper. 
diff --git a/scrapped_outputs/f84513dae53b8321be30a7a2e916ae2f.txt b/scrapped_outputs/f84513dae53b8321be30a7a2e916ae2f.txt new file mode 100644 index 0000000000000000000000000000000000000000..d01ee532445db56e8f74e8c6a472f7e9146b01fe --- /dev/null +++ b/scrapped_outputs/f84513dae53b8321be30a7a2e916ae2f.txt @@ -0,0 +1,97 @@ +Loading and Adding Custom Pipelines + +Diffusers allows you to conveniently load any custom pipeline from the Hugging Face Hub as well as any official community pipeline +via the DiffusionPipeline class. + +Loading custom pipelines from the Hub + +Custom pipelines can be easily loaded from any model repository on the Hub that defines a diffusion pipeline in a pipeline.py file. +Let’s load a dummy pipeline from hf-internal-testing/diffusers-dummy-pipeline. +All you need to do is pass the custom pipeline repo id with the custom_pipeline argument alongside the repo from where you wish to load the pipeline modules. + + + Copied +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline" +) +This will load the custom pipeline as defined in the model repository. +By loading a custom pipeline from the Hugging Face Hub, you are trusting that the code you are loading +is safe 🔒. Make sure to check out the code online before loading & running it automatically. + +Loading official community pipelines + +Community pipelines are summarized in the community examples folder +Similarly, you need to pass both the repo id from where you wish to load the weights as well as the custom_pipeline argument. Here the custom_pipeline argument should consist simply of the filename of the community pipeline excluding the .py suffix, e.g. clip_guided_stable_diffusion. +Since community pipelines are often more complex, one can mix loading weights from an official repo id +and passing pipeline modules directly. + + + Copied +from diffusers import DiffusionPipeline +from transformers import CLIPFeatureExtractor, CLIPModel + +clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" + +feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id) +clip_model = CLIPModel.from_pretrained(clip_model_id) + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + custom_pipeline="clip_guided_stable_diffusion", + clip_model=clip_model, + feature_extractor=feature_extractor, +) + +Adding custom pipelines to the Hub + +To add a custom pipeline to the Hub, all you need to do is to define a pipeline class that inherits +from DiffusionPipeline in a pipeline.py file. +Make sure that the whole pipeline is encapsulated within a single class and that the pipeline.py file +has only one such class. +Let’s quickly define an example pipeline. + + + Copied +import torch +from diffusers import DiffusionPipeline + + +class MyPipeline(DiffusionPipeline): + def __init__(self, unet, scheduler): + super().__init__() + + self.register_modules(unet=unet, scheduler=scheduler) + + @torch.no_grad() + def __call__(self, batch_size: int = 1, num_inference_steps: int = 50): + # Sample gaussian noise to begin loop + image = torch.randn((batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)) + + image = image.to(self.device) + + # set step values + self.scheduler.set_timesteps(num_inference_steps) + + for t in self.progress_bar(self.scheduler.timesteps): + # 1. predict noise model_output + model_output = self.unet(image, t).sample + + # 2. 
predict previous mean of image x_t-1 + # (a DDIM-style eta in [0, 1] could also be passed to step() here, but eta is not part of this pipeline's signature) + # do x_t -> x_t-1 + image = self.scheduler.step(model_output, t, image).prev_sample + + image = (image / 2 + 0.5).clamp(0, 1) + image = image.cpu().permute(0, 2, 3, 1).numpy() + + return image +Now you can upload this short file under the name pipeline.py in your preferred model repository. For Stable Diffusion pipelines, you may also join the community organisation for shared pipelines to upload yours. +Finally, we can load the custom pipeline by passing the model repository name, e.g. sd-diffusers-pipelines-library/my_custom_pipeline, alongside the model repository from which we want to load the unet and scheduler components. + + + Copied +my_pipeline = DiffusionPipeline.from_pretrained( + "google/ddpm-cifar10-32", custom_pipeline="patrickvonplaten/my_custom_pipeline" +) diff --git a/scrapped_outputs/f87b037706bfdd08b88d3e2a32bd5480.txt b/scrapped_outputs/f87b037706bfdd08b88d3e2a32bd5480.txt new file mode 100644 index 0000000000000000000000000000000000000000..350fcde2194ed65053fe8201403456d5175dba21 --- /dev/null +++ b/scrapped_outputs/f87b037706bfdd08b88d3e2a32bd5480.txt @@ -0,0 +1,98 @@ +DPMSolverMultistepScheduler DPMSolverMultistep is a multistep scheduler from DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps and DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPMSolver (and the improved version DPMSolver++) is a fast, dedicated high-order solver for diffusion ODEs with a convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality +samples, and it can generate quite good samples even in 10 steps. Tips It is recommended to set solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling. Dynamic thresholding from Imagen is supported, and for pixel-space +diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic +thresholding. This thresholding method is unsuitable for latent-space diffusion models such as +Stable Diffusion. The SDE variant of DPMSolver and DPM-Solver++ is also supported, but only for the first- and second-order solvers. This is a fast SDE solver for the reverse diffusion SDE. It is recommended to use the second-order sde-dpmsolver++. DPMSolverMultistepScheduler class diffusers.DPMSolverMultistepScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True euler_at_final: bool = False use_karras_sigmas: Optional = False use_lu_lambdas: Optional = False final_sigmas_type: Optional = 'zero' lambda_min_clipped: float = -inf variance_type: Optional = None timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps used to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value.
beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. solver_order (int, defaults to 2) — +The DPMSolver order which can be 1 or 2 or 3. It is recommended to use solver_order=2 for guided +sampling, and solver_order=3 for unconditional sampling. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True and +algorithm_type="dpmsolver++". algorithm_type (str, defaults to dpmsolver++) — +Algorithm type for the solver; can be dpmsolver, dpmsolver++, sde-dpmsolver or sde-dpmsolver++. The +dpmsolver type implements the algorithms in the DPMSolver +paper, and the dpmsolver++ type implements the algorithms in the +DPMSolver++ paper. It is recommended to use dpmsolver++ or +sde-dpmsolver++ with solver_order=2 for guided sampling like in Stable Diffusion. solver_type (str, defaults to midpoint) — +Solver type for the second-order solver; can be midpoint or heun. The solver type slightly affects the +sample quality, especially for a small number of steps. It is recommended to use midpoint solvers. lower_order_final (bool, defaults to True) — +Whether to use lower-order solvers in the final steps. Only valid for < 15 inference steps. This can +stabilize the sampling of DPMSolver for steps < 15, especially for steps <= 10. euler_at_final (bool, defaults to False) — +Whether to use Euler’s method in the final step. It is a trade-off between numerical stability and detail +richness. This can stabilize the sampling of the SDE variant of DPMSolver for small number of inference +steps, but sometimes may result in blurring. use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. use_lu_lambdas (bool, optional, defaults to False) — +Whether to use the uniform-logSNR for step sizes proposed by Lu’s DPM-Solver in the noise schedule during +the sampling process. If True, the sigmas and time steps are determined according to a sequence of +lambda(t). final_sigmas_type (str, defaults to "zero") — +The final sigma value for the noise schedule during the sampling process. If "sigma_min", the final sigma +is the same as the last sigma in the training schedule. If zero, the final sigma is set to 0. lambda_min_clipped (float, defaults to -inf) — +Clipping threshold for the minimum value of lambda(t) for numerical stability. This is critical for the +cosine (squaredcos_cap_v2) noise schedule. variance_type (str, optional) — +Set to “learned” or “learned_range” for diffusion models that predict variance. 
If set, the model’s output +contains the predicted Gaussian variance. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. DPMSolverMultistepScheduler is a fast dedicated high-order solver for diffusion ODEs. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. convert_model_output < source > ( model_output: FloatTensor *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The converted model output. + Convert the model output to the corresponding type the DPMSolver/DPMSolver++ algorithm needs. DPM-Solver is +designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an +integral of the data prediction model. The algorithm and model type are decoupled. You can use either DPMSolver or DPMSolver++ for both noise +prediction and data prediction models. dpm_solver_first_order_update < source > ( model_output: FloatTensor *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output (torch.FloatTensor) — +The direct output from the learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the first-order DPMSolver (equivalent to DDIM). multistep_dpm_solver_second_order_update < source > ( model_output_list: List *args sample: FloatTensor = None noise: Optional = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the second-order multistep DPMSolver. multistep_dpm_solver_third_order_update < source > ( model_output_list: List *args sample: FloatTensor = None **kwargs ) → torch.FloatTensor Parameters model_output_list (List[torch.FloatTensor]) — +The direct outputs from learned diffusion model at current and latter timesteps. sample (torch.FloatTensor) — +A current instance of a sample created by diffusion process. Returns +torch.FloatTensor + +The sample tensor at the previous timestep. + One step for the third-order multistep DPMSolver. scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. 
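As a complement to the Tips above, a minimal sketch of how this scheduler might be swapped into an existing pipeline follows; the model id is reused from elsewhere in these docs, and the settings mirror the recommendations for guided sampling (use_karras_sigmas is optional).

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

# model id reused from elsewhere in these docs; substitute your own checkpoint
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# recommended settings for guided sampling
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="dpmsolver++",
    solver_order=2,
    use_karras_sigmas=True,
)

# DPMSolver++ typically produces good samples in roughly 20 steps
image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
image.save("astronaut.png")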
set_timesteps < source > ( num_inference_steps: int = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the multistep DPMSolver. SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/f87c2afdc0c4b76da837d09f9196b940.txt b/scrapped_outputs/f87c2afdc0c4b76da837d09f9196b940.txt new file mode 100644 index 0000000000000000000000000000000000000000..0aba4091381b5504e1d5649bc5294cc35e6adca9 --- /dev/null +++ b/scrapped_outputs/f87c2afdc0c4b76da837d09f9196b940.txt @@ -0,0 +1,338 @@ +Safe Stable Diffusion + +Safe Stable Diffusion was proposed in Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models and mitigates the well known issue that models like Stable Diffusion that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, or otherwise offensive content. +Safe Stable Diffusion is an extension to the Stable Diffusion that drastically reduces content like this. +The abstract of the paper is the following: +Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. 
As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment. +Overview: the pipeline_stable_diffusion_safe.py pipeline supports the Text-to-Image Generation task. + + +Tips + +Safe Stable Diffusion may also be used with the weights of Stable Diffusion. + +Run Safe Stable Diffusion + +Safe Stable Diffusion can be tested very easily with the StableDiffusionPipelineSafe and the "AIML-TUDA/stable-diffusion-safe" checkpoint, in exactly the same way as shown in the Conditional Image Generation Guide. + +Interacting with the Safety Concept + +To check and edit the currently used safety concept, use the safety_concept property of StableDiffusionPipelineSafe: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.safety_concept +For each generated image, the applied safety concept is also contained in the StableDiffusionSafePipelineOutput. + +Using pre-defined safety configurations + +You may use the 4 configurations defined in the Safe Latent Diffusion paper as follows: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe +>>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker" +>>> out = pipeline(prompt=prompt, **SafetyConfig.MAX) +The following configurations are available: SafetyConfig.WEAK, SafetyConfig.MEDIUM, SafetyConfig.STRONG, and SafetyConfig.MAX. + +How to load and use different schedulers + +The Safe Stable Diffusion pipeline uses the PNDMScheduler by default, but Diffusers provides many other schedulers that can be used with it, such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import StableDiffusionPipelineSafe, EulerDiscreteScheduler + +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("AIML-TUDA/stable-diffusion-safe", subfolder="scheduler") +>>> pipeline = StableDiffusionPipelineSafe.from_pretrained( +... "AIML-TUDA/stable-diffusion-safe", scheduler=euler_scheduler +... ) + +StableDiffusionSafePipelineOutput + + +class diffusers.pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] +unsafe_images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray, NoneType] +applied_safety_concept: typing.Optional[str] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels).
PIL images or NumPy array representing the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" +(nsfw) content, or None if safety checking could not be performed. + + +unsafe_images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images that were flagged by the safety checker and may contain "not-safe-for-work" +(nsfw) content, or None if no safety check was performed or no images were flagged. + + +applied_safety_concept (str) — +The safety concept that was applied for safety guidance, or None if safety guidance was disabled. + + +Output class for Safe Stable Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +StableDiffusionPipelineSafe + + +class diffusers.StableDiffusionPipelineSafe + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: SafeStableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Safe Latent Diffusion. +The implementation is based on the StableDiffusionPipeline. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
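Building on the SafetyConfig example above, a hedged sketch of calling the pipeline with explicit safety-guidance arguments (documented under __call__ below) might look as follows; the numeric values here are illustrative rather than the library's presets, and the prompt is a placeholder.

import torch
from diffusers import StableDiffusionPipelineSafe

pipeline = StableDiffusionPipelineSafe.from_pretrained(
    "AIML-TUDA/stable-diffusion-safe", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a person on a city street, photorealistic"

# explicit safety-guidance arguments; the values below are illustrative,
# use the SafetyConfig presets shown earlier for the values from the paper
output = pipeline(
    prompt,
    sld_guidance_scale=2000,
    sld_warmup_steps=7,
    sld_threshold=0.025,
    sld_momentum_scale=0.5,
    sld_mom_beta=0.7,
)
print(output.applied_safety_concept)
image = output.images[0]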
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +sld_guidance_scale: typing.Optional[float] = 1000 +sld_warmup_steps: typing.Optional[int] = 10 +sld_threshold: typing.Optional[float] = 0.01 +sld_momentum_scale: typing.Optional[float] = 0.3 +sld_mom_beta: typing.Optional[float] = 0.4 + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. 
If not specified, the callback will be +called at every step. + + +sld_guidance_scale (float, optional, defaults to 1000) — +Safe latent guidance as defined in Safe Latent Diffusion. +sld_guidance_scale is defined as sS of Eq. 6. If set to be less than 1, safety guidance will be +disabled. + + +sld_warmup_steps (int, optional, defaults to 10) — +Number of warmup steps for safety guidance. SLD will only be applied for diffusion steps greater than +sld_warmup_steps. sld_warmup_steps is defined as delta of Safe Latent +Diffusion. + + +sld_threshold (float, optional, defaults to 0.01) — +Threshold that separates the hyperplane between appropriate and inappropriate images. sld_threshold +is defined as lamda of Eq. 5 in Safe Latent Diffusion. + + +sld_momentum_scale (float, optional, defaults to 0.3) — +Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0 +momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_momentum_scale is defined as sm of Eq. 7 in Safe Latent +Diffusion. + + +sld_mom_beta (float, optional, defaults to 0.4) — +Defines how safety guidance momentum builds up. sld_mom_beta indicates how much of the previous +momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller +than sld_warmup_steps. sld_mom_beta is defined as beta m of Eq. 8 in Safe Latent +Diffusion. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +enable_sequential_cpu_offload + +< +source +> +( +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. diff --git a/scrapped_outputs/f8800e80229ae69de9c41feb4ee8c5d3.txt b/scrapped_outputs/f8800e80229ae69de9c41feb4ee8c5d3.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2c26b00071da8c297b8820b37c712b100e43678 --- /dev/null +++ b/scrapped_outputs/f8800e80229ae69de9c41feb4ee8c5d3.txt @@ -0,0 +1,245 @@ +Models 🤗 Diffusers provides pretrained models for popular algorithms and modules to create custom diffusion systems. The primary function of models is to denoise an input sample as modeled by the distribution pθ(xt−1∣xt)p_{\theta}(x_{t-1}|x_{t})pθ​(xt−1​∣xt​). All models are built from the base ModelMixin class which is a torch.nn.Module providing basic functionality for saving and loading models, locally and from the Hugging Face Hub. ModelMixin class diffusers.ModelMixin < source > ( ) Base class for all models. ModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). active_adapters < source > ( ) Gets the current list of active adapters of the model. 
If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — +The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt +methods. adapter_name (str, optional, defaults to "default") — +The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned +to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT +documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fallback to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft disable_gradient_checkpointing < source > ( ) Deactivates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model will use self.active_adapters() to retrieve the +list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft enable_gradient_checkpointing < source > ( ) Activates gradient checkpointing for the current model (may be referred to as activation checkpointing or +checkpoint activations in other frameworks). enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during +inference. Speed up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import UNet2DConditionModel +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> model = UNet2DConditionModel.from_pretrained( +... "stabilityai/stable-diffusion-2-1", subfolder="unet", torch_dtype=torch.float16 +... ) +>>> model = model.to("cuda") +>>> model.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) from_pretrained < source > ( pretrained_model_name_or_path: Union **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with save_pretrained(). + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. 
torch_dtype (str or torch.dtype, optional) — +Override the default torch.dtype and load the model with another dtype. If "auto" is passed, the +dtype is automatically derived from the model’s weights. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_flax (bool, optional, defaults to False) — +Load the model weights from a Flax checkpoint save file. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. mirror (str, optional) — +Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not +guarantee the timeliness or safety of the source, and you should refer to the mirror site for more +information. device_map (str or Dict[str, Union[int, str, torch.device]], optional) — +A map that specifies where each submodule should go. It doesn’t need to be defined for each +parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the +same device. +Set device_map="auto" to have 🤗 Accelerate automatically compute the most optimized device_map. For +more information about each option see designing a device +map. max_memory (Dict, optional) — +A dictionary device identifier for the maximum memory. Will default to the maximum memory available for +each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — +The path to offload weights if device_map contains the value "disk". offload_state_dict (bool, optional) — +If True, temporarily offloads the CPU state dict to the hard drive to avoid running out of CPU RAM if +the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True +when there is some disk offload. low_cpu_mem_usage (bool, optional, defaults to True if torch version >= 1.9.0 else False) — +Speed up model loading only loading the pretrained weights and not initializing the weights. This also +tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. +Only supported for PyTorch >= 1.9.0. If you are using an older version of PyTorch, setting this +argument to True will raise an error. variant (str, optional) — +Load weights from a specified variant filename such as "fp16" or "ema". 
This is ignored when +loading from_flax. use_safetensors (bool, optional, defaults to None) — +If set to None, the safetensors weights are downloaded if they’re available and if the +safetensors library is installed. If set to True, the model is forcibly loaded from safetensors +weights. If set to False, safetensors weights are not loaded. Instantiate a pretrained PyTorch model from a pretrained model configuration. The model is set in evaluation mode - model.eval() - by default, and dropout modules are deactivated. To +train the model, set it back in training mode with model.train(). To use private or gated models, log-in with +huggingface-cli login. You can also activate the special +“offline-mode” to use this method in a +firewalled environment. Example: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — +Whether or not to return only the number of trainable parameters. exclude_embeddings (bool, optional, defaults to False) — +Whether or not to return only the number of non-embedding parameters. Returns +int + +The number of parameters. + Get number of (trainable or non-embedding) parameters in the module. Example: Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet") +unet.num_parameters(only_trainable=True) +859520964 save_pretrained < source > ( save_directory: Union is_main_process: bool = True save_function: Optional = None safe_serialization: bool = True variant: Optional = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. save_function (Callable) — +The function to use to save the state dictionary. Useful during distributed training when you need to +replace torch.save with another method. Can be configured with the environment variable +DIFFUSERS_SAVE_MODE. safe_serialization (bool, optional, defaults to True) — +Whether to save the model using safetensors or the traditional PyTorch way with pickle. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). 
kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]])) — +The list of adapters to set or the adapter name in case of single adapter. Sets a specific adapter by forcing the model to only use that adapter and disables the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them on the PEFT +official documentation: https://huggingface.co/docs/peft FlaxModelMixin class diffusers.FlaxModelMixin < source > ( ) Base class for all Flax models. FlaxModelMixin takes care of storing the model configuration and provides methods for loading, downloading and +saving models. config_name (str) — Filename to save a model to when calling save_pretrained(). from_pretrained < source > ( pretrained_model_name_or_path: Union dtype: dtype = *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — +Can be either: + +A string, the model id (for example runwayml/stable-diffusion-v1-5) of a pretrained model +hosted on the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +using save_pretrained(). + dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — +The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and +jax.numpy.bfloat16 (on TPUs). +This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If +specified, all the computation will be performed with the given dtype. + +This only specifies the dtype of the computation and does not influence the dtype of model +parameters. +If you wish to change the dtype of the model parameters, see to_fp16() and +to_bf16(). + model_args (sequence of positional arguments, optional) — +All remaining positional arguments are passed to the underlying model’s __init__ method. cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only(bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. from_pt (bool, optional, defaults to False) — +Load the model weights from a PyTorch checkpoint save file. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the model (for +example, output_attentions=True). 
Behaves differently depending on whether a config is provided or +automatically loaded: + +If a configuration is provided with config, kwargs are directly passed to the underlying +model’s __init__ method (we assume all relevant updates to the configuration have already been +done). +If a configuration is not provided, kwargs are first passed to the configuration class +initialization function from_config(). Each key of the kwargs that corresponds +to a configuration attribute is used to override said attribute with the supplied kwargs value. +Remaining keys that do not correspond to any configuration attribute are passed to the underlying +model’s __init__ function. + Instantiate a pretrained Flax model from a pretrained model configuration. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co and cache. +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # Model was saved using *save_pretrained('./test/saved_model/')* (for example purposes, not runnable). +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("./test/saved_model/") If you get the error message below, you need to finetune the weights for your downstream task: Copied Some weights of UNet2DConditionModel were not initialized from the model checkpoint at runwayml/stable-diffusion-v1-5 and are newly initialized because the shapes did not match: +- conv_in.weight: found shape torch.Size([320, 4, 3, 3]) in the checkpoint and torch.Size([320, 9, 3, 3]) in the model instantiated +You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. save_pretrained < source > ( save_directory: Union params: Union is_main_process: bool = True push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory to save a model and its configuration file to. Will be created if it doesn’t exist. params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. is_main_process (bool, optional, defaults to True) — +Whether the process calling this is the main process or not. Useful during distributed training and you +need to call this function on all processes. In this case, set is_main_process=True only on the main +process to avoid race conditions. push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional key word arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory so that it can be reloaded using the +from_pretrained() class method. to_bf16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast +the params in place. This method can be used on a TPU to explicitly convert the model parameters to bfloat16 precision to do full +half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. 
Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model parameters will be in fp32 precision, to cast these to bfloat16 precision +>>> params = model.to_bf16(params) +>>> # If you don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_bf16(params, mask) to_fp16 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the +params in place. This method can be used on a GPU to explicitly convert the model parameters to float16 precision to do full +half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # load model +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to cast these to float16 +>>> params = model.to_fp16(params) +>>> # If you want don't want to cast certain parameters (for example layer norm bias and scale) +>>> # then pass the mask as follows +>>> from flax import traverse_util + +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> flat_params = traverse_util.flatten_dict(params) +>>> mask = { +... path: (path[-2] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) +... for path in flat_params +... } +>>> mask = traverse_util.unflatten_dict(mask) +>>> params = model.to_fp16(params, mask) to_fp32 < source > ( params: Union mask: Any = None ) Parameters params (Union[Dict, FrozenDict]) — +A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — +A PyTree with same structure as the params tree. The leaves should be booleans. It should be True +for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the +model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: Copied >>> from diffusers import FlaxUNet2DConditionModel + +>>> # Download model and configuration from huggingface.co +>>> model, params = FlaxUNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5") +>>> # By default, the model params will be in fp32, to illustrate the use of this method, +>>> # we'll first cast to fp16 and back to fp32 +>>> params = model.to_f16(params) +>>> # now cast back to fp32 +>>> params = model.to_fp32(params) PushToHubMixin class diffusers.utils.PushToHubMixin < source > ( ) A Mixin to push a model, scheduler, or pipeline to the Hugging Face Hub. 
push_to_hub < source > ( repo_id: str commit_message: Optional = None private: Optional = None token: Optional = None create_pr: bool = False safe_serialization: bool = True variant: Optional = None ) Parameters repo_id (str) — +The name of the repository you want to push your model, scheduler, or pipeline files to. It should +contain your organization name when pushing to an organization. repo_id can also be a path to a local +directory. commit_message (str, optional) — +Message to commit while pushing. Default to "Upload {object}". private (bool, optional) — +Whether or not the repository created should be private. token (str, optional) — +The token to use as HTTP bearer authorization for remote files. The token generated when running +huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — +Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to True) — +Whether or not to convert the model weights to the safetensors format. variant (str, optional) — +If specified, weights are saved in the format pytorch_model..bin. Upload model, scheduler, or pipeline files to the 🤗 Hugging Face Hub. Examples: Copied from diffusers import UNet2DConditionModel + +unet = UNet2DConditionModel.from_pretrained("stabilityai/stable-diffusion-2", subfolder="unet") + +# Push the `unet` to your namespace with the name "my-finetuned-unet". +unet.push_to_hub("my-finetuned-unet") + +# Push the `unet` to an organization with the name "my-finetuned-unet". +unet.push_to_hub("your-org/my-finetuned-unet") diff --git a/scrapped_outputs/f88c2de65a1991c0b80be01bcad5c40b.txt b/scrapped_outputs/f88c2de65a1991c0b80be01bcad5c40b.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfa3d716ecf3de4b47681af162e2893154070538 --- /dev/null +++ b/scrapped_outputs/f88c2de65a1991c0b80be01bcad5c40b.txt @@ -0,0 +1,60 @@ +AudioLDM AudioLDM was proposed in AudioLDM: Text-to-Audio Generation with Latent Diffusion Models by Haohe Liu et al. Inspired by Stable Diffusion, AudioLDM +is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from CLAP +latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional +sound effects, human speech and music. The abstract from the paper is: Text-to-audio (TTA) system has recently gained attention for its ability to synthesize general audio based on text descriptions. However, previous studies in TTA have limited generation quality with high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., frechet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at this https URL. 
The original codebase can be found at haoheliu/AudioLDM. Tips When constructing a prompt, keep in mind: Descriptive prompt inputs work best; you can use adjectives to describe the sound (for example, “high quality” or “clear”) and make the prompt context specific (for example, “water stream in a forest” instead of “stream”). It’s best to use general terms like “cat” or “dog” instead of specific names or abstract objects the model may not be familiar with. During inference: The quality of the predicted audio sample can be controlled by the num_inference_steps argument; higher steps give higher quality audio at the expense of slower inference. The length of the predicted audio sample can be controlled by varying the audio_length_in_s argument. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. AudioLDMPipeline class diffusers.AudioLDMPipeline < source > ( vae: AutoencoderKL text_encoder: ClapTextModelWithProjection tokenizer: Union unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (ClapTextModelWithProjection) — +Frozen text-encoder (ClapTextModelWithProjection, specifically the +laion/clap-htsat-unfused variant. tokenizer (PreTrainedTokenizer) — +A RobertaTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded audio latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded audio latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. vocoder (SpeechT5HifiGan) — +Vocoder of class SpeechT5HifiGan. Pipeline for text-to-audio generation using AudioLDM. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: Union = None audio_length_in_s: Optional = None num_inference_steps: int = 10 guidance_scale: float = 2.5 negative_prompt: Union = None num_waveforms_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None output_type: Optional = 'np' ) → AudioPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide audio generation. If not defined, you need to pass prompt_embeds. audio_length_in_s (int, optional, defaults to 5.12) — +The length of the generated audio sample in seconds. num_inference_steps (int, optional, defaults to 10) — +The number of denoising steps. More denoising steps usually lead to a higher quality audio at the +expense of slower inference. guidance_scale (float, optional, defaults to 2.5) — +A higher guidance scale value encourages the model to generate audio that is closely linked to the text +prompt at the expense of lower sound quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in audio generation. 
If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). num_waveforms_per_prompt (int, optional, defaults to 1) — +The number of waveforms to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. return_dict (bool, optional, defaults to True) — +Whether or not to return a AudioPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. output_type (str, optional, defaults to "np") — +The output format of the generated image. Choose between "np" to return a NumPy np.ndarray or +"pt" to return a PyTorch torch.Tensor object. Returns +AudioPipelineOutput or tuple + +If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated audio. + The call function to the pipeline for generation. Examples: Copied >>> from diffusers import AudioLDMPipeline +>>> import torch +>>> import scipy + +>>> repo_id = "cvssp/audioldm-s-full-v2" +>>> pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" +>>> audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0] + +>>> # save the audio sample as a .wav file +>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) AudioPipelineOutput class diffusers.AudioPipelineOutput < source > ( audios: ndarray ) Parameters audios (np.ndarray) — +List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_rate). Output class for audio pipelines. 
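As a worked example of the arguments described above, the sketch below (not part of the original reference; the prompt strings are illustrative) generates several candidate waveforms for a single prompt with num_waveforms_per_prompt and a negative_prompt, then saves every candidate so you can keep the one that sounds best.
 Copied
import scipy
import torch
from diffusers import AudioLDMPipeline

# load the same checkpoint used in the example above
pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-s-full-v2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Acoustic guitar strumming in a quiet room"  # illustrative prompt
negative_prompt = "low quality, average quality"  # illustrative negative prompt

# generate three candidate waveforms for the same prompt
audios = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=10,
    audio_length_in_s=5.0,
    num_waveforms_per_prompt=3,
).audios

# save each candidate as a 16 kHz .wav file
for i, audio in enumerate(audios):
    scipy.io.wavfile.write(f"candidate_{i}.wav", rate=16000, data=audio)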
diff --git a/scrapped_outputs/f890b77384b15a737952141ac46a13f6.txt b/scrapped_outputs/f890b77384b15a737952141ac46a13f6.txt new file mode 100644 index 0000000000000000000000000000000000000000..00271b49f1e24fbd75015632570698e2956adecc --- /dev/null +++ b/scrapped_outputs/f890b77384b15a737952141ac46a13f6.txt @@ -0,0 +1,253 @@ +The Stable Diffusion Guide 🎨 + + + +Intro + +Stable Diffusion is a Latent Diffusion model developed by researchers from the Machine Vision and Learning group at LMU Munich, a.k.a. CompVis. +Model checkpoints were publicly released at the end of August 2022 by a collaboration of Stability AI, CompVis, and Runway with support from EleutherAI and LAION. For more information, you can check out the official blog post. +Since its public release, the community has done an incredible job at working together to make the stable diffusion checkpoints faster, more memory efficient, and more performant. +🧨 Diffusers offers a simple API to run stable diffusion with all memory, computing, and quality improvements. +This notebook walks you through the improvements one-by-one so you can best leverage StableDiffusionPipeline for inference. + +Prompt Engineering 🎨 + + + +When running Stable Diffusion in inference, we usually want to generate a certain type or style of image and then improve upon it. Improving upon a previously generated image means running inference over and over again with a different prompt and potentially a different seed until we are happy with our generation. +So to begin with, it is most important to speed up stable diffusion as much as possible to generate as many pictures as possible in a given amount of time. +This can be done by both improving the computational efficiency (speed) and the memory efficiency (GPU RAM). +Let’s start by looking into computational efficiency first. +Throughout the notebook, we will focus on runwayml/stable-diffusion-v1-5: + + + Copied +model_id = "runwayml/stable-diffusion-v1-5" +Let’s load the pipeline. + +Speed Optimization + + + + Copied +from diffusers import StableDiffusionPipeline + +pipe = StableDiffusionPipeline.from_pretrained(model_id) +We aim to generate a beautiful photograph of an old warrior chief and will later try to find the best prompt to generate such a photograph. For now, let’s keep the prompt simple: + + + Copied +prompt = "portrait photo of a old warrior chief" +To begin with, we should make sure we run inference on GPU, so let’s move the pipeline to GPU, just like you would with any PyTorch module. + + + Copied +pipe = pipe.to("cuda") +To generate an image, you simply call the pipeline (its __call__ method). +To make sure we can reproduce more or less the same image in every call, let’s make use of the generator. See the documentation on reproducibility here for more information. + + + Copied +import torch + +generator = torch.Generator("cuda").manual_seed(0) +Now, let’s take it for a spin. + + + Copied +image = pipe(prompt, generator=generator).images[0] +image + +Cool, this now took roughly 30 seconds on a T4 GPU (you might see faster inference if your allocated GPU is better than a T4). +The default run we did above used full float32 precision and ran the default number of inference steps (50). The easiest speed-ups come from switching to float16 (or half) precision and simply running fewer inference steps. Let’s load the model now in float16 instead.
+ + + Copied +import torch + +pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipe = pipe.to("cuda") +And we can again call the pipeline to generate an image. + + + Copied +generator = torch.Generator("cuda").manual_seed(0) + +image = pipe(prompt, generator=generator).images[0] +image + +Cool, this is almost three times as fast for arguably the same image quality. +We strongly suggest always running your pipelines in float16 as so far we have very rarely seen degradations in quality because of it. +Next, let’s see if we need to use 50 inference steps or whether we could use significantly fewer. The number of inference steps is associated with the denoising scheduler we use. Choosing a more efficient scheduler could help us decrease the number of steps. +Let’s have a look at all the schedulers the stable diffusion pipeline is compatible with. + + + Copied +pipe.scheduler.compatibles + + + Copied + [diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler] +Cool, that’s a lot of schedulers. +🧨 Diffusers is constantly adding a bunch of novel schedulers/samplers that can be used with Stable Diffusion. For more information, we recommend taking a look at the official documentation here. +Alright, right now Stable Diffusion is using the PNDMScheduler which usually requires around 50 inference steps. However, other schedulers such as DPMSolverMultistepScheduler or DPMSolverSinglestepScheduler seem to get away with just 20 to 25 inference steps. Let’s try them out. +You can set a new scheduler by making use of the from_config function. + + + Copied +from diffusers import DPMSolverMultistepScheduler + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +Now, let’s try to reduce the number of inference steps to just 20. + + + Copied +generator = torch.Generator("cuda").manual_seed(0) + +image = pipe(prompt, generator=generator, num_inference_steps=20).images[0] +image + +The image now does look a little different, but it’s arguably still of equally high quality. We now cut inference time to just 4 seconds though 😍. + +Memory Optimization + +Less memory used in generation indirectly implies more speed, since we’re often trying to maximize how many images we can generate per second. Usually, the more images per inference run, the more images per second too. +The easiest way to see how many images we can generate at once is to simply try it out, and see when we get a “Out-of-memory (OOM)” error. +We can run batched inference by simply passing a list of prompts and generators. Let’s define a quick function that generates a batch for us. 
+ + + Copied +def get_inputs(batch_size=1): + generator = [torch.Generator("cuda").manual_seed(i) for i in range(batch_size)] + prompts = batch_size * [prompt] + num_inference_steps = 20 + + return {"prompt": prompts, "generator": generator, "num_inference_steps": num_inference_steps} +This function returns a list of prompts and a list of generators, so we can reuse the generator that produced a result we like. +We also need a method that allows us to easily display a batch of images. + + + Copied +from PIL import Image + +def image_grid(imgs, rows=2, cols=2): + w, h = imgs[0].size + grid = Image.new('RGB', size=(cols*w, rows*h)) + + for i, img in enumerate(imgs): + grid.paste(img, box=(i%cols*w, i//cols*h)) + return grid +Cool, let’s see how much memory we can use starting with batch_size=4. + + + Copied +images = pipe(**get_inputs(batch_size=4)).images +image_grid(images) + +Going over a batch_size of 4 will error out in this notebook (assuming we are running it on a T4 GPU). Also, we can see we only generate slightly more images per second (3.75s/image) compared to 4s/image previously. +However, the community has found some nice tricks to improve the memory constraints further. After stable diffusion was released, the community found improvements within days and shared them freely over GitHub - open-source at its finest! I believe the original idea came from this GitHub thread. +By far most of the memory is taken up by the cross-attention layers. Instead of running this operation in batch, one can run it sequentially to save a significant amount of memory. +It can easily be enabled by calling enable_attention_slicing as is documented here. + + + Copied +pipe.enable_attention_slicing() +Great, now that attention slicing is enabled, let’s try to double the batch size again, going for batch_size=8. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Nice, it works. However, the speed gain is again not very big (it might however be much more significant on other GPUs). +We’re at roughly 3.5 seconds per image 🔥 which is probably the fastest we can be with a simple T4 without sacrificing quality. +Next, let’s look into how to improve the quality! + +Quality Improvements + +Now that our image generation pipeline is blazing fast, let’s try to get maximum image quality. +First of all, image quality is extremely subjective, so it’s difficult to make general claims here. +The most obvious step to take to improve quality is to use better checkpoints. Since the release of Stable Diffusion, many improved versions have been released, which are summarized here: +Official Release - 22 Aug 2022: Stable-Diffusion 1.4 +20 October 2022: Stable-Diffusion 1.5 +24 Nov 2022: Stable-Diffusion 2.0 +7 Dec 2022: Stable-Diffusion 2.1 +Newer versions don’t necessarily mean better image quality with the same parameters. People mentioned that 2.0 is slightly worse than 1.5 for certain prompts, but given the right prompt engineering 2.0 and 2.1 seem to be better. +Overall, we strongly recommend just trying the models out and reading up on advice online (e.g. it has been shown that using negative prompts is very important for 2.0 and 2.1 to get the highest possible quality. See for example this nice blog post. +Additionally, the community has started fine-tuning many of the above versions on certain styles with some of them having an extremely high quality and gaining a lot of traction. 
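As mentioned above, negative prompts can make a noticeable difference, especially for the 2.x checkpoints; they are passed as an extra argument to the pipeline call. A minimal sketch reusing the pipe, prompt, and generator setup from this guide (the negative prompt text itself is illustrative):
 Copied
negative_prompt = "lowres, bad anatomy, blurry, watermark"  # illustrative

generator = torch.Generator("cuda").manual_seed(0)
image = pipe(prompt, negative_prompt=negative_prompt, generator=generator, num_inference_steps=20).images[0]
image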
+We recommend having a look at all diffusers checkpoints sorted by downloads and trying out the different checkpoints. +For the following, we will stick to v1.5 for simplicity. +Next, we can also try to optimize single components of the pipeline, e.g. switching out the latent decoder. For more details on how the whole Stable Diffusion pipeline works, please have a look at this blog post. +Let’s load stabilityai’s newest auto-decoder. + + + Copied +from diffusers import AutoencoderKL + +vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16).to("cuda") +Now we can set it to the vae of the pipeline to use it. + + + Copied +pipe.vae = vae +Let’s run the same prompt as before to compare quality. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Seems like the difference is only very minor, but the new generations are arguably a bit sharper. +Cool, finally, let’s look a bit into prompt engineering. +Our goal was to generate a photo of an old warrior chief. Let’s now try to bring a bit more color into the photos and make the look more impressive. +Originally our prompt was ”portrait photo of an old warrior chief“. +To improve the prompt, it often helps to add cues that could have been used online to save high-quality photos, as well as add more details. +Essentially, when doing prompt engineering, one has to think: +How was the photo or similar photos of the one I want probably stored on the internet? +What additional detail can I give that steers the models into the style that I want? +Cool, let’s add more details. + + + Copied +prompt += ", tribal panther make up, blue on red, side profile, looking away, serious eyes" +and let’s also add some cues that usually help to generate higher quality images. + + + Copied +prompt += " 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta" +prompt +Cool, let’s now try this prompt. + + + Copied +images = pipe(**get_inputs(batch_size=8)).images +image_grid(images, rows=2, cols=4) + +Pretty impressive! We got some very high-quality image generations there. The 2nd image is my personal favorite, so I’ll re-use this seed and see whether I can tweak the prompts slightly by using “oldest warrior”, “old”, "", and “young” instead of “old”. + + + Copied +prompts = [ + "portrait photo of the oldest warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a old warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", + "portrait photo of a young warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes 50mm portrait photography, hard rim lighting photography--beta --ar 2:3 --beta --upbeta", +] + +generator = [torch.Generator("cuda").manual_seed(1) for _ in range(len(prompts))] # 1 because we want the 2nd image + +images = pipe(prompt=prompts, generator=generator, num_inference_steps=25).images +image_grid(images) + +The first picture looks nice! The eye movement slightly changed and looks nice. 
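If one of the candidates stands out, it can also be worth locking in its seed and re-rendering only that image with more inference steps. A short sketch reusing the enriched prompt and pipeline from above (seed 1 corresponds to the favorite second image of the earlier grid):
 Copied
generator = torch.Generator("cuda").manual_seed(1)  # seed 1 produced the favorite image above
image = pipe(prompt, generator=generator, num_inference_steps=50).images[0]
image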
This finished up our 101-guide on how to use Stable Diffusion 🤗. +For more information on optimization or other guides, I recommend taking a look at the following: +Blog post about Stable Diffusion: In-detail blog post explaining Stable Diffusion. +FlashAttention: XFormers flash attention can optimize your model even further with more speed and memory improvements. +Dreambooth - Quickly customize the model by fine-tuning it. +General info on Stable Diffusion - Info on other tasks that are powered by Stable Diffusion. diff --git a/scrapped_outputs/f8a27b3949d8a7cc0568cf4d6fc6a0dd.txt b/scrapped_outputs/f8a27b3949d8a7cc0568cf4d6fc6a0dd.txt new file mode 100644 index 0000000000000000000000000000000000000000..f44a3d21a8e26d613db10e2b1641d1bc1fb54490 --- /dev/null +++ b/scrapped_outputs/f8a27b3949d8a7cc0568cf4d6fc6a0dd.txt @@ -0,0 +1,2 @@ +🧨 Diffusers’ Ethical Guidelines Preamble Diffusers provides pre-trained diffusion models and serves as a modular toolbox for inference and training. Given its real case applications in the world and potential negative impacts on society, we think it is important to provide the project with ethical guidelines to guide the development, users’ contributions, and usage of the Diffusers library. The risks associated with using this technology are still being examined, but to name a few: copyrights issues for artists; deep-fake exploitation; sexual content generation in inappropriate contexts; non-consensual impersonation; harmful social biases perpetuating the oppression of marginalized groups. +We will keep tracking risks and adapt the following guidelines based on the community’s responsiveness and valuable feedback. Scope The Diffusers community will apply the following ethical guidelines to the project’s development and help coordinate how the community will integrate the contributions, especially concerning sensitive topics related to ethical concerns. Ethical guidelines The following ethical guidelines apply generally, but we will primarily implement them when dealing with ethically sensitive issues while making a technical choice. Furthermore, we commit to adapting those ethical principles over time following emerging harms related to the state of the art of the technology in question. Transparency: we are committed to being transparent in managing PRs, explaining our choices to users, and making technical decisions. Consistency: we are committed to guaranteeing our users the same level of attention in project management, keeping it technically stable and consistent. Simplicity: with a desire to make it easy to use and exploit the Diffusers library, we are committed to keeping the project’s goals lean and coherent. Accessibility: the Diffusers project helps lower the entry bar for contributors who can help run it even without technical expertise. Doing so makes research artifacts more accessible to the community. Reproducibility: we aim to be transparent about the reproducibility of upstream code, models, and datasets when made available through the Diffusers library. Responsibility: as a community and through teamwork, we hold a collective responsibility to our users by anticipating and mitigating this technology’s potential risks and dangers. Examples of implementations: Safety features and Mechanisms The team works daily to make the technical and non-technical tools available to deal with the potential ethical and social risks associated with diffusion technology. 
Moreover, the community’s input is invaluable in ensuring these features’ implementation and raising awareness with us. Community tab: it enables the community to discuss and better collaborate on a project. Bias exploration and evaluation: the Hugging Face team provides a space to demonstrate the biases in Stable Diffusion interactively. In this sense, we support and encourage bias explorers and evaluations. Encouraging safety in deployment Safe Stable Diffusion: It mitigates the well-known issue that models, like Stable Diffusion, that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. Related paper: Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models. Safety Checker: It checks and compares the class probability of a set of hard-coded harmful concepts in the embedding space against an image after it has been generated. The harmful concepts are intentionally hidden to prevent reverse engineering of the checker. Staged released on the Hub: in particularly sensitive situations, access to some repositories should be restricted. This staged release is an intermediary step that allows the repository’s authors to have more control over its use. Licensing: OpenRAILs, a new type of licensing, allow us to ensure free access while having a set of restrictions that ensure more responsible use. diff --git a/scrapped_outputs/f8dae35c093cebd4c182980f637776cc.txt b/scrapped_outputs/f8dae35c093cebd4c182980f637776cc.txt new file mode 100644 index 0000000000000000000000000000000000000000..9de2a9918b4f9735de3ea0d622cdf65706556cae --- /dev/null +++ b/scrapped_outputs/f8dae35c093cebd4c182980f637776cc.txt @@ -0,0 +1,124 @@ +Schedulers Diffusion pipelines are inherently a collection of diffusion models and schedulers that are partly independent from each other. This means that one is able to switch out parts of the pipeline to better customize +a pipeline to one’s use case. The best example of this is the Schedulers. Whereas diffusion models usually simply define the forward pass from noise to a less noisy sample, +schedulers define the whole denoising process, i.e.: How many denoising steps? Stochastic or deterministic? What algorithm to use to find the denoised sample? They can be quite complex and often define a trade-off between denoising speed and denoising quality. +It is extremely difficult to measure quantitatively which scheduler works best for a given diffusion pipeline, so it is often recommended to simply try out which works best. The following paragraphs show how to do so with the 🧨 Diffusers library. Load pipeline Let’s start by loading the runwayml/stable-diffusion-v1-5 model in the DiffusionPipeline: Copied from huggingface_hub import login +from diffusers import DiffusionPipeline +import torch + +login() + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) Next, we move it to GPU: Copied pipeline.to("cuda") Access the scheduler The scheduler is always one of the components of the pipeline and is usually called "scheduler". +So it can be accessed via the "scheduler" property. 
Copied pipeline.scheduler Output: Copied PNDMScheduler { + "_class_name": "PNDMScheduler", + "_diffusers_version": "0.21.4", + "beta_end": 0.012, + "beta_schedule": "scaled_linear", + "beta_start": 0.00085, + "clip_sample": false, + "num_train_timesteps": 1000, + "set_alpha_to_one": false, + "skip_prk_steps": true, + "steps_offset": 1, + "timestep_spacing": "leading", + "trained_betas": null +} We can see that the scheduler is of type PNDMScheduler. +Cool, now let’s compare the scheduler in its performance to other schedulers. +First we define a prompt on which we will test all the different schedulers: Copied prompt = "A photograph of an astronaut riding a horse on Mars, high resolution, high definition." Next, we create a generator from a random seed that will ensure that we can generate similar images as well as run the pipeline: Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image Changing the scheduler Now we show how easy it is to change the scheduler of a pipeline. Every scheduler has a property compatibles +which defines all compatible schedulers. You can take a look at all available, compatible schedulers for the Stable Diffusion pipeline as follows. Copied pipeline.scheduler.compatibles Output: Copied [diffusers.utils.dummy_torch_and_torchsde_objects.DPMSolverSDEScheduler, + diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, + diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, + diffusers.schedulers.scheduling_ddim.DDIMScheduler, + diffusers.schedulers.scheduling_ddpm.DDPMScheduler, + diffusers.schedulers.scheduling_heun_discrete.HeunDiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler, + diffusers.schedulers.scheduling_deis_multistep.DEISMultistepScheduler, + diffusers.schedulers.scheduling_pndm.PNDMScheduler, + diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, + diffusers.schedulers.scheduling_unipc_multistep.UniPCMultistepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_discrete.KDPM2DiscreteScheduler, + diffusers.schedulers.scheduling_dpmsolver_singlestep.DPMSolverSinglestepScheduler, + diffusers.schedulers.scheduling_k_dpm_2_ancestral_discrete.KDPM2AncestralDiscreteScheduler] Cool, lots of schedulers to look at. Feel free to have a look at their respective class definitions: EulerDiscreteScheduler, LMSDiscreteScheduler, DDIMScheduler, DDPMScheduler, HeunDiscreteScheduler, DPMSolverMultistepScheduler, DEISMultistepScheduler, PNDMScheduler, EulerAncestralDiscreteScheduler, UniPCMultistepScheduler, KDPM2DiscreteScheduler, DPMSolverSinglestepScheduler, KDPM2AncestralDiscreteScheduler. We will now compare the input prompt with all other schedulers. To change the scheduler of the pipeline you can make use of the +convenient config property in combination with the from_config() function. 
Copied pipeline.scheduler.config returns a dictionary of the configuration of the scheduler: Output: Copied FrozenDict([('num_train_timesteps', 1000), + ('beta_start', 0.00085), + ('beta_end', 0.012), + ('beta_schedule', 'scaled_linear'), + ('trained_betas', None), + ('skip_prk_steps', True), + ('set_alpha_to_one', False), + ('prediction_type', 'epsilon'), + ('timestep_spacing', 'leading'), + ('steps_offset', 1), + ('_use_default_values', ['timestep_spacing', 'prediction_type']), + ('_class_name', 'PNDMScheduler'), + ('_diffusers_version', '0.21.4'), + ('clip_sample', False)]) This configuration can then be used to instantiate a scheduler +of a different class that is compatible with the pipeline. Here, +we change the scheduler to the DDIMScheduler. Copied from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) Cool, now we can run the pipeline again to compare the generation quality. Copied generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image If you are a JAX/Flax user, please check this section instead. Compare schedulers So far we have tried running the stable diffusion pipeline with two schedulers: PNDMScheduler and DDIMScheduler. +A number of better schedulers have been released that can be run with much fewer steps; let’s compare them here: LMSDiscreteScheduler usually leads to better results: Copied from diffusers import LMSDiscreteScheduler + +pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator).images[0] +image EulerDiscreteScheduler and EulerAncestralDiscreteScheduler can generate high quality results with as little as 30 steps. Copied from diffusers import EulerDiscreteScheduler + +pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image and: Copied from diffusers import EulerAncestralDiscreteScheduler + +pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=30).images[0] +image DPMSolverMultistepScheduler gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Copied from diffusers import DPMSolverMultistepScheduler + +pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) + +generator = torch.Generator(device="cuda").manual_seed(8) +image = pipeline(prompt, generator=generator, num_inference_steps=20).images[0] +image As you can see, most images look very similar and are arguably of very similar quality. It often really depends on the specific use case which scheduler to choose. A good approach is always to run multiple different +schedulers to compare results. Changing the Scheduler in Flax If you are a JAX/Flax user, you can also change the default pipeline scheduler. 
This is a complete example of how to run inference using the Flax Stable Diffusion pipeline and the super-fast DPM-Solver++ scheduler: Copied import jax +import numpy as np +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline, FlaxDPMSolverMultistepScheduler + +model_id = "runwayml/stable-diffusion-v1-5" +scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained( + model_id, + subfolder="scheduler" +) +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + model_id, + scheduler=scheduler, + revision="bf16", + dtype=jax.numpy.bfloat16, +) +params["scheduler"] = scheduler_state + +# Generate 1 image per parallel device (8 on TPUv2-8 or TPUv3-8) +prompt = "a photo of an astronaut riding a horse on mars" +num_samples = jax.device_count() +prompt_ids = pipeline.prepare_inputs([prompt] * num_samples) + +prng_seed = jax.random.PRNGKey(0) +num_inference_steps = 25 + +# shard inputs and rng +params = replicate(params) +prng_seed = jax.random.split(prng_seed, jax.device_count()) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images +images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) The following Flax schedulers are not yet compatible with the Flax Stable Diffusion Pipeline: FlaxLMSDiscreteScheduler FlaxDDPMScheduler diff --git a/scrapped_outputs/f8dbe63f78c4d6537f9e0721cd178fbf.txt b/scrapped_outputs/f8dbe63f78c4d6537f9e0721cd178fbf.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cb15709ef2db459331589418952eb68057fc110 --- /dev/null +++ b/scrapped_outputs/f8dbe63f78c4d6537f9e0721cd178fbf.txt @@ -0,0 +1,26 @@ +IP-Adapter IP-Adapter is a lightweight adapter that enables prompting a diffusion model with an image. This method decouples the cross-attention layers of the image and text features. The image features are generated from an image encoder. Files generated from IP-Adapter are only ~100MBs. Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter loading guide. IPAdapterMixin class diffusers.loaders.IPAdapterMixin < source > ( ) Mixin for handling IP Adapters. load_ip_adapter < source > ( pretrained_model_name_or_path_or_dict: Union subfolder: Union weight_name: Union **kwargs ) Parameters pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing the model weights saved +with ModelMixin.save_pretrained(). +A torch state +dict. + cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. unload_ip_adapter < source > ( ) Unloads the IP Adapter weights Examples: Copied >>> # Assuming `pipeline` is already loaded with the IP Adapter weights. +>>> pipeline.unload_ip_adapter() +>>> ... diff --git a/scrapped_outputs/f8dc491162163f49310952661c908c1d.txt b/scrapped_outputs/f8dc491162163f49310952661c908c1d.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5b8f792fc115c3ab0410e2647d2cb1a410a75ea --- /dev/null +++ b/scrapped_outputs/f8dc491162163f49310952661c908c1d.txt @@ -0,0 +1,27 @@ +UNet2DModel The UNet model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on it’s number of dimensions and whether it is a conditional model or not. This is a 2D UNet model. The abstract from the paper is: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. 
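Before the API reference below, here is a minimal, self-contained usage sketch (not from the original page; the sizes and channel counts are illustrative) that instantiates a small UNet2DModel and runs a single denoising forward pass:
 Copied
import torch
from diffusers import UNet2DModel

# a small unconditional 2D UNet; every other argument keeps its default value
model = UNet2DModel(
    sample_size=64,
    in_channels=3,
    out_channels=3,
    block_out_channels=(64, 128, 256, 256),
)

noisy_sample = torch.randn(1, 3, 64, 64)  # (batch, channel, height, width)
timestep = 10  # how far along the denoising process we are

with torch.no_grad():
    noise_pred = model(noisy_sample, timestep).sample

print(noise_pred.shape)  # same shape as the input: torch.Size([1, 3, 64, 64])
The output has the same shape as the input, which is exactly the property the introduction above highlights.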
UNet2DModel class diffusers.UNet2DModel < source > ( sample_size: Union = None in_channels: int = 3 out_channels: int = 3 center_input_sample: bool = False time_embedding_type: str = 'positional' freq_shift: int = 0 flip_sin_to_cos: bool = True down_block_types: Tuple = ('DownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D', 'AttnDownBlock2D') up_block_types: Tuple = ('AttnUpBlock2D', 'AttnUpBlock2D', 'AttnUpBlock2D', 'UpBlock2D') block_out_channels: Tuple = (224, 448, 672, 896) layers_per_block: int = 2 mid_block_scale_factor: float = 1 downsample_padding: int = 1 downsample_type: str = 'conv' upsample_type: str = 'conv' dropout: float = 0.0 act_fn: str = 'silu' attention_head_dim: Optional = 8 norm_num_groups: int = 32 attn_norm_num_groups: Optional = None norm_eps: float = 1e-05 resnet_time_scale_shift: str = 'default' add_attention: bool = True class_embed_type: Optional = None num_class_embeds: Optional = None num_train_timesteps: Optional = None ) Parameters sample_size (int or Tuple[int, int], optional, defaults to None) — +Height and width of input/output sample. Dimensions must be a multiple of 2 ** (len(block_out_channels) - 1). in_channels (int, optional, defaults to 3) — Number of channels in the input sample. out_channels (int, optional, defaults to 3) — Number of channels in the output. center_input_sample (bool, optional, defaults to False) — Whether to center the input sample. time_embedding_type (str, optional, defaults to "positional") — Type of time embedding to use. freq_shift (int, optional, defaults to 0) — Frequency shift for Fourier time embedding. flip_sin_to_cos (bool, optional, defaults to True) — +Whether to flip sin to cos for Fourier time embedding. down_block_types (Tuple[str], optional, defaults to ("DownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D", "AttnDownBlock2D")) — +Tuple of downsample block types. mid_block_type (str, optional, defaults to "UNetMidBlock2D") — +Block type for middle of UNet, it can be either UNetMidBlock2D or UnCLIPUNetMidBlock2D. up_block_types (Tuple[str], optional, defaults to ("AttnUpBlock2D", "AttnUpBlock2D", "AttnUpBlock2D", "UpBlock2D")) — +Tuple of upsample block types. block_out_channels (Tuple[int], optional, defaults to (224, 448, 672, 896)) — +Tuple of block output channels. layers_per_block (int, optional, defaults to 2) — The number of layers per block. mid_block_scale_factor (float, optional, defaults to 1) — The scale factor for the mid block. downsample_padding (int, optional, defaults to 1) — The padding for the downsample convolution. downsample_type (str, optional, defaults to conv) — +The downsample type for downsampling layers. Choose between “conv” and “resnet” upsample_type (str, optional, defaults to conv) — +The upsample type for upsampling layers. Choose between “conv” and “resnet” dropout (float, optional, defaults to 0.0) — The dropout probability to use. act_fn (str, optional, defaults to "silu") — The activation function to use. attention_head_dim (int, optional, defaults to 8) — The attention head dimension. norm_num_groups (int, optional, defaults to 32) — The number of groups for normalization. attn_norm_num_groups (int, optional, defaults to None) — +If set to an integer, a group norm layer will be created in the mid block’s Attention layer with the +given number of groups. If left as None, the group norm layer will only be created if +resnet_time_scale_shift is set to default, and if created will have norm_num_groups groups. 
norm_eps (float, optional, defaults to 1e-5) — The epsilon for normalization. resnet_time_scale_shift (str, optional, defaults to "default") — Time scale shift config +for ResNet blocks (see ResnetBlock2D). Choose from default or scale_shift. class_embed_type (str, optional, defaults to None) — +The type of class embedding to use which is ultimately summed with the time embeddings. Choose from None, +"timestep", or "identity". num_class_embeds (int, optional, defaults to None) — +Input dimension of the learnable embedding matrix to be projected to time_embed_dim when performing class +conditioning with class_embed_type equal to None. A 2D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. This model inherits from ModelMixin. Check the superclass documentation for it’s generic methods implemented +for all models (such as downloading or saving). forward < source > ( sample: FloatTensor timestep: Union class_labels: Optional = None return_dict: bool = True ) → UNet2DOutput or tuple Parameters sample (torch.FloatTensor) — +The noisy input tensor with the following shape (batch, channel, height, width). timestep (torch.FloatTensor or float or int) — The number of timesteps to denoise an input. class_labels (torch.FloatTensor, optional, defaults to None) — +Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. return_dict (bool, optional, defaults to True) — +Whether or not to return a UNet2DOutput instead of a plain tuple. Returns +UNet2DOutput or tuple + +If return_dict is True, an UNet2DOutput is returned, otherwise a tuple is +returned where the first element is the sample tensor. + The UNet2DModel forward method. UNet2DOutput class diffusers.models.unet_2d.UNet2DOutput < source > ( sample: FloatTensor ) Parameters sample (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — +The hidden states output from the last layer of the model. The output of UNet2DModel. diff --git a/scrapped_outputs/f8dd0d7ffdcb1db80c65b247c04201c1.txt b/scrapped_outputs/f8dd0d7ffdcb1db80c65b247c04201c1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e4abc6c3bdbf1174d841ae03e5693f7552e06dd7 --- /dev/null +++ b/scrapped_outputs/f8dd0d7ffdcb1db80c65b247c04201c1.txt @@ -0,0 +1,38 @@ +Distributed inference with multiple GPUs On distributed setups, you can run inference across multiple GPUs with 🤗 Accelerate or PyTorch Distributed, which is useful for generating with multiple prompts in parallel. This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. 🤗 Accelerate 🤗 Accelerate is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. To begin, create a Python file and initialize an accelerate.PartialState to create a distributed environment; your setup is automatically detected so you don’t need to explicitly define the rank or world_size. Move the DiffusionPipeline to distributed_state.device to assign a GPU to each process. Now use the split_between_processes utility as a context manager to automatically distribute the prompts between the number of processes. 
Copied import torch +from accelerate import PartialState +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) +distributed_state = PartialState() +pipeline.to(distributed_state.device) + +with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: + result = pipeline(prompt).images[0] + result.save(f"result_{distributed_state.process_index}.png") Use the --num_processes argument to specify the number of GPUs to use, and call accelerate launch to run the script: Copied accelerate launch run_distributed.py --num_processes=2 To learn more, take a look at the Distributed Inference with 🤗 Accelerate guide. PyTorch Distributed PyTorch supports DistributedDataParallel which enables data parallelism. To start, create a Python file and import torch.distributed and torch.multiprocessing to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a DiffusionPipeline: Copied import torch +import torch.distributed as dist +import torch.multiprocessing as mp + +from diffusers import DiffusionPipeline + +sd = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True +) You’ll want to create a function to run inference; init_process_group handles creating a distributed environment with the type of backend to use, the rank of the current process, and the world_size or the number of processes participating. If you’re running inference in parallel over 2 GPUs, then the world_size is 2. Move the DiffusionPipeline to rank and use get_rank to assign a GPU to each process, where each process handles a different prompt: Copied def run_inference(rank, world_size): + dist.init_process_group("nccl", rank=rank, world_size=world_size) + + sd.to(rank) + + if torch.distributed.get_rank() == 0: + prompt = "a dog" + elif torch.distributed.get_rank() == 1: + prompt = "a cat" + + image = sd(prompt).images[0] + image.save(f"./{'_'.join(prompt)}.png") To run the distributed inference, call mp.spawn to run the run_inference function on the number of GPUs defined in world_size: Copied def main(): + world_size = 2 + mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) + + +if __name__ == "__main__": + main() Once you’ve completed the inference script, use the --nproc_per_node argument to specify the number of GPUs to use and call torchrun to run the script: Copied torchrun run_distributed.py --nproc_per_node=2 diff --git a/scrapped_outputs/f8f75993c9f61db5622344fe19181c1f.txt b/scrapped_outputs/f8f75993c9f61db5622344fe19181c1f.txt new file mode 100644 index 0000000000000000000000000000000000000000..3430d6a11deef14ab8481ec2f16a8dea3078a149 --- /dev/null +++ b/scrapped_outputs/f8f75993c9f61db5622344fe19181c1f.txt @@ -0,0 +1,2226 @@ +IF + + +Overview + +DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. 
+The model is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: +Stage 1: a base model that generates a 64x64 px image based on a text prompt, +Stage 2: a 64x64 px => 256x256 px super-resolution model, and a +Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, +which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. + +Usage + +Before you can use IF, you need to accept its usage conditions. To do so: +Make sure to have a Hugging Face account and be logged in +Accept the license on the model card of DeepFloyd/IF-I-IF-v1.0 and DeepFloyd/IF-II-L-v1.0 +Make sure to log in locally. Install huggingface_hub + + + Copied +pip install huggingface_hub --upgrade +run the login function in a Python shell + + + Copied +from huggingface_hub import login + +login() +and enter your Hugging Face Hub access token. +Next we install diffusers and dependencies: + + + Copied +pip install diffusers accelerate transformers safetensors +The following sections give more detailed examples of how to use IF. Specifically: +Text-to-Image Generation +Image-to-Image Generation +Inpainting +Reusing model weights +Speed optimization +Memory optimization +Available checkpoints +Stage-1 +DeepFloyd/IF-I-IF-v1.0 +DeepFloyd/IF-I-L-v1.0 +DeepFloyd/IF-I-M-v1.0 +Stage-2 +DeepFloyd/IF-II-L-v1.0 +DeepFloyd/IF-II-M-v1.0 +Stage-3 +stabilityai/stable-diffusion-x4-upscaler +Demo + +Google Colab + + +Text-to-Image Generation + +By default diffusers makes use of model cpu offloading +to run the whole IF pipeline with as little as 14 GB of VRAM.
+ + + Copied +from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +image = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +pt_to_pil(image)[0].save("./if_stage_I.png") + +# stage 2 +image = stage_2( + image=image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_II.png") + +# stage 3 +image = stage_3(prompt=prompt, image=image, noise_level=100, generator=generator).images +image[0].save("./if_stage_III.png") + +Text Guided Image-to-Image Generation + +The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. +Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here. 
+
+
+ Copied
+from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline
+from diffusers.utils import pt_to_pil
+import torch
+
+from PIL import Image
+import requests
+from io import BytesIO
+
+# download image
+url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
+response = requests.get(url)
+original_image = Image.open(BytesIO(response.content)).convert("RGB")
+original_image = original_image.resize((768, 512))
+
+# stage 1
+stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
+stage_1.enable_model_cpu_offload()
+
+# stage 2
+stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained(
+    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
+)
+stage_2.enable_model_cpu_offload()
+
+# stage 3
+safety_modules = {
+    "feature_extractor": stage_1.feature_extractor,
+    "safety_checker": stage_1.safety_checker,
+    "watermarker": stage_1.watermarker,
+}
+stage_3 = DiffusionPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16
+)
+stage_3.enable_model_cpu_offload()
+
+prompt = "A fantasy landscape in style minecraft"
+generator = torch.manual_seed(1)
+
+# text embeds
+prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)
+
+# stage 1
+image = stage_1(
+    image=original_image,
+    prompt_embeds=prompt_embeds,
+    negative_prompt_embeds=negative_embeds,
+    generator=generator,
+    output_type="pt",
+).images
+pt_to_pil(image)[0].save("./if_stage_I.png")
+
+# stage 2
+image = stage_2(
+    image=image,
+    original_image=original_image,
+    prompt_embeds=prompt_embeds,
+    negative_prompt_embeds=negative_embeds,
+    generator=generator,
+    output_type="pt",
+).images
+pt_to_pil(image)[0].save("./if_stage_II.png")
+
+# stage 3
+image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images
+image[0].save("./if_stage_III.png")
+
+Text Guided Inpainting Generation
+
+The same IF model weights can also be used for text-guided inpainting.
+In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines.
+Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines
+without loading them twice by making use of the ~DiffusionPipeline.components() function, as explained in the Converting between different pipelines section below.
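+The example below downloads a ready-made mask. If you want to define the inpainting region yourself, a minimal PIL sketch (the rectangle coordinates are arbitrary placeholders, and original_image is assumed to be a PIL image like the one loaded in the example below) could look like this; white pixels mark the area to repaint, black pixels are preserved.
+
+from PIL import Image, ImageDraw
+
+# Start from an all-black mask (everything preserved) the same size as the input image.
+mask_image = Image.new("L", original_image.size, 0)
+draw = ImageDraw.Draw(mask_image)
+# Paint the region that should be inpainted white (placeholder coordinates).
+draw.rectangle((100, 100, 300, 200), fill=255)
+mask_image.save("./custom_mask.png")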
+ + + Copied +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil +import torch + +from PIL import Image +import requests +from io import BytesIO + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image = original_image + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +response = requests.get(url) +mask_image = Image.open(BytesIO(response.content)) +mask_image = mask_image + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +image = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_I.png") + +# stage 2 +image = stage_2( + image=image, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +pt_to_pil(image)[0].save("./if_stage_II.png") + +# stage 3 +image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images +image[0].save("./if_stage_III.png") + +Converting between different pipelines + +In addition to being loaded with from_pretrained, Pipelines can also be loaded directly from each other. + + + Copied +from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) + +Optimizing for speed + +The simplest optimization to run IF faster is to move all model components to the GPU. + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") +You can also run the diffusion process for a shorter number of timesteps. 
+This can either be done with the num_inference_steps argument + + + Copied +pipe("", num_inference_steps=30) +Or with the timesteps argument + + + Copied +from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) +When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to +the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. + + + Copied +pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images +You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. + + + Copied +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder) +pipe.unet = torch.compile(pipe.unet) + +Optimizing for memory + +When optimizing for GPU memory, we can use the standard diffusers cpu offloading APIs. +Either the model based CPU offloading, + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() +or the more aggressive layer based CPU offloading. + + + Copied +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() +Additionally, T5 can be loaded in 8bit precision + + + Copied +from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-IF-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-IF-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") +For CPU RAM constrained machines like google colab free tier where we can’t load all +model components to the CPU at once, we can manually only load the pipeline with +the text encoder or unet when the respective model components are needed. 
+ + + Copied +from diffusers import IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-IF-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-IF-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-IF-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +image = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +pt_to_pil(image)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +image = pipe( + image=image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +pt_to_pil(image)[0].save("./if_stage_II.png") + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_if.py +Text-to-Image Generation +- +pipeline_if_superresolution.py +Text-to-Image Generation +- +pipeline_if_img2img.py +Image-to-Image Generation +- +pipeline_if_img2img_superresolution.py +Image-to-Image Generation +- +pipeline_if_inpainting.py +Image-to-Image Generation +- +pipeline_if_inpainting_superresolution.py +Image-to-Image Generation +- + +IFPipeline + + +class diffusers.IFPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 100 +timesteps: typing.List[int] = None +guidance_scale: float = 7.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +height: typing.Optional[int] = None +width: typing.Optional[int] = None +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], 
NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. 
If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. 
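+As a rough rule of thumb, prefer enable_model_cpu_offload() when each sub-model still fits on the GPU, and fall back to enable_sequential_cpu_offload() only when it does not, since layer-wise offloading is considerably slower. A minimal sketch contrasting the two (assuming the fp16 stage-1 checkpoint used throughout this page):
+
+import torch
+from diffusers import DiffusionPipeline
+
+pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
+
+# Moves one whole sub-model at a time to the GPU: moderate memory savings, small speed cost.
+pipe.enable_model_cpu_offload()
+
+# Alternatively (call only one of the two on a given pipeline):
+# streams weights to the GPU submodule by submodule -- largest memory savings, slowest execution.
+# pipe.enable_sequential_cpu_offload()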
+ +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFSuperResolutionPipeline + + +class diffusers.IFSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] = None +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 250 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFImg2ImgPipeline + + +class diffusers.IFImg2ImgPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.7 +num_inference_steps: int = 80 +timesteps: typing.List[int] = None +guidance_scale: float = 10.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-IF-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). 
+prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFImg2ImgSuperResolutionPipeline + + +class diffusers.IFImg2ImgSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] +original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.8 +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 250 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. 
If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. 
+ +Examples: + + + Copied +>>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-IF-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). 
+prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFInpaintingPipeline + + +class diffusers.IFInpaintingPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +mask_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 1.0 +num_inference_steps: int = 50 +timesteps: typing.List[int] = None +guidance_scale: float = 7.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +clean_caption: bool = True +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. 
+device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + +IFInpaintingSuperResolutionPipeline + + +class diffusers.IFInpaintingSuperResolutionPipeline + +< +source +> +( +tokenizer: T5Tokenizer +text_encoder: T5EncoderModel +unet: UNet2DConditionModel +scheduler: DDPMScheduler +image_noising_scheduler: DDPMScheduler +safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker] +feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor] +watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker] +requires_safety_checker: bool = True + +) + + + + +__call__ + +< +source +> +( +image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] +original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +mask_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None +strength: float = 0.8 +prompt: typing.Union[str, typing.List[str]] = None +num_inference_steps: int = 100 +timesteps: typing.List[int] = None +guidance_scale: float = 4.0 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +noise_level: int = 0 +clean_caption: bool = True + +) +→ +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +Parameters + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. + + +mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. 
If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. 
If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.cross_attention. + + +noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) + + +clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. + + +Returns + +~pipelines.stable_diffusion.IFPipelineOutput or tuple + + + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. 
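As a quick illustration of this trade-off, the sketch below (an informal example, assuming a CUDA-capable machine and the fp16 variant of the stage II checkpoint) enables whole-model offloading; the commented-out call shows the more aggressive layer-level alternative.

import torch
from diffusers import IFInpaintingSuperResolutionPipeline

# Load the stage II inpainting super-resolution checkpoint in half precision.
pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)

# Whole-model offloading: each sub-model is moved to the GPU only while its forward pass runs.
pipe.enable_model_cpu_offload()

# For a tighter memory budget, trade speed for memory with layer-level offloading instead:
# pipe.enable_sequential_cpu_offload()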
+ +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s +models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward` method called. + +encode_prompt + +< +source +> +( +prompt +do_classifier_free_guidance = True +num_images_per_prompt = 1 +device = None +negative_prompt = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +clean_caption: bool = False + +) + + +Parameters + +prompt (str or List[str], optional) — +prompt to be encoded + + + +Encodes the prompt into text encoder hidden states. +device: (torch.device, optional): +torch device to place the resulting embeddings on +num_images_per_prompt (int, optional, defaults to 1): +number of images that should be generated per prompt +do_classifier_free_guidance (bool, optional, defaults to True): +whether to use classifier free guidance or not +negative_prompt (str or List[str], optional): +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). +prompt_embeds (torch.FloatTensor, optional): +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. +negative_prompt_embeds (torch.FloatTensor, optional): +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. diff --git a/scrapped_outputs/f914e89fa1650249693277bc171af535.txt b/scrapped_outputs/f914e89fa1650249693277bc171af535.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f927505376f70faa584197c84970db62.txt b/scrapped_outputs/f927505376f70faa584197c84970db62.txt new file mode 100644 index 0000000000000000000000000000000000000000..97a771bf1c4a69150adf921fcc1b4adbe14566c1 --- /dev/null +++ b/scrapped_outputs/f927505376f70faa584197c84970db62.txt @@ -0,0 +1,927 @@ +DeepFloyd IF Overview DeepFloyd IF is a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. +The model is a modular composed of a frozen text encoder and three cascaded pixel diffusion modules: Stage 1: a base model that generates 64x64 px image based on text prompt, Stage 2: a 64x64 px => 256x256 px super-resolution model, and Stage 3: a 256x256 px => 1024x1024 px super-resolution model +Stage 1 and Stage 2 utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. +Stage 3 is Stability AI’s x4 Upscaling model. +The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. +Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis. 
Usage Before you can use IF, you need to accept its usage conditions. To do so: Make sure to have a Hugging Face account and be logged in. Accept the license on the model card of DeepFloyd/IF-I-XL-v1.0. Accepting the license on the stage I model card will auto accept for the other IF models. Make sure to login locally. Install huggingface_hub: Copied pip install huggingface_hub --upgrade run the login function in a Python shell: Copied from huggingface_hub import login + +login() and enter your Hugging Face Hub access token. Next we install diffusers and dependencies: Copied pip install -q diffusers accelerate transformers The following sections give more in-detail examples of how to use IF. Specifically: Text-to-Image Generation Image-to-Image Generation Inpainting Reusing model weights Speed optimization Memory optimization Available checkpoints Stage-1 DeepFloyd/IF-I-XL-v1.0 DeepFloyd/IF-I-L-v1.0 DeepFloyd/IF-I-M-v1.0 Stage-2 DeepFloyd/IF-II-L-v1.0 DeepFloyd/IF-II-M-v1.0 Stage-3 stabilityai/stable-diffusion-x4-upscaler Google Colab Text-to-Image Generation By default diffusers makes use of model cpu offloading to run the whole IF pipeline with as little as 14 GB of VRAM. Copied from diffusers import DiffusionPipeline +from diffusers.utils import pt_to_pil, make_image_grid +import torch + +# stage 1 +stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt" +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, noise_level=100, generator=generator).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=3) Text Guided Image-to-Image Generation The same IF model weights can be used for text-guided image-to-image translation or image variation. +In this case, just make sure to load the weights using the IFImg2ImgPipeline and IFImg2ImgSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the image-to-image pipelines +without loading them twice by making use of the components argument as explained here.
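As a minimal sketch of the component reuse mentioned in the note above (assuming stage_1 and stage_2 from the text-to-image snippet are still in memory), the same weights can be rewrapped into the image-to-image classes without downloading them again; the Converting between different pipelines section below shows the same pattern in full.

from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline

# Rewrap the already-loaded components instead of calling from_pretrained a second time.
img2img_stage_1 = IFImg2ImgPipeline(**stage_1.components)
img2img_stage_2 = IFImg2ImgSuperResolutionPipeline(**stage_2.components)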
Copied from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +original_image = load_image(url) +original_image = original_image.resize((768, 512)) + +# stage 1 +stage_1 = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFImg2ImgSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "A fantasy landscape in style minecraft" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=4) Text Guided Inpainting Generation The same IF model weights can also be used for text-guided inpainting. +In this case, just make sure to load the weights using the IFInpaintingPipeline and IFInpaintingSuperResolutionPipeline pipelines. Note: You can also directly move the weights of the text-to-image pipelines to the inpainting pipelines +without loading them twice by making use of the ~DiffusionPipeline.components() function as explained here.
Copied from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +from diffusers.utils import pt_to_pil, load_image, make_image_grid +import torch + +# download image +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +original_image = load_image(url) + +# download mask +url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +mask_image = load_image(url) + +# stage 1 +stage_1 = IFInpaintingPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +stage_1.enable_model_cpu_offload() + +# stage 2 +stage_2 = IFInpaintingSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +) +stage_2.enable_model_cpu_offload() + +# stage 3 +safety_modules = { + "feature_extractor": stage_1.feature_extractor, + "safety_checker": stage_1.safety_checker, + "watermarker": stage_1.watermarker, +} +stage_3 = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +) +stage_3.enable_model_cpu_offload() + +prompt = "blue sunglasses" +generator = torch.manual_seed(1) + +# text embeds +prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt) + +# stage 1 +stage_1_output = stage_1( + image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# stage 2 +stage_2_output = stage_2( + image=stage_1_output, + original_image=original_image, + mask_image=mask_image, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + generator=generator, + output_type="pt", +).images +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") + +# stage 3 +stage_3_output = stage_3(prompt=prompt, image=stage_2_output, generator=generator, noise_level=100).images +#stage_3_output[0].save("./if_stage_III.png") +make_image_grid([original_image, mask_image, pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0], stage_3_output[0]], rows=1, cols=5) Converting between different pipelines In addition to being loaded with from_pretrained, pipelines can also be constructed directly from one another by reusing their components. Copied from diffusers import IFPipeline, IFSuperResolutionPipeline + +pipe_1 = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0") +pipe_2 = IFSuperResolutionPipeline.from_pretrained("DeepFloyd/IF-II-L-v1.0") + + +from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline + +pipe_1 = IFImg2ImgPipeline(**pipe_1.components) +pipe_2 = IFImg2ImgSuperResolutionPipeline(**pipe_2.components) + + +from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline + +pipe_1 = IFInpaintingPipeline(**pipe_1.components) +pipe_2 = IFInpaintingSuperResolutionPipeline(**pipe_2.components) Optimizing for speed The simplest optimization to run IF faster is to move all model components to the GPU. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") You can also run the diffusion process for fewer timesteps.
This can either be done with the num_inference_steps argument: Copied pipe("", num_inference_steps=30) Or with the timesteps argument: Copied from diffusers.pipelines.deepfloyd_if import fast27_timesteps + +pipe("", timesteps=fast27_timesteps) When doing image variation or inpainting, you can also decrease the number of timesteps +with the strength argument. The strength argument is the amount of noise to add to the input image which also determines how many steps to run in the denoising process. +A smaller number will vary the image less but run faster. Copied pipe = IFImg2ImgPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +image = pipe(image=image, prompt="", strength=0.3).images You can also use torch.compile. Note that we have not exhaustively tested torch.compile +with IF and it might not give expected results. Copied from diffusers import DiffusionPipeline +import torch + +pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.to("cuda") + +pipe.text_encoder = torch.compile(pipe.text_encoder, mode="reduce-overhead", fullgraph=True) +pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) Optimizing for memory When optimizing for GPU memory, we can use the standard diffusers CPU offloading APIs. Either the model based CPU offloading, Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_model_cpu_offload() or the more aggressive layer based CPU offloading. Copied pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +pipe.enable_sequential_cpu_offload() Additionally, T5 can be loaded in 8bit precision Copied from transformers import T5EncoderModel + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +from diffusers import DiffusionPipeline + +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt_embeds, negative_embeds = pipe.encode_prompt("") For CPU RAM constrained machines like Google Colab free tier where we can’t load all model components to the CPU at once, we can manually only load the pipeline with +the text encoder or UNet when the respective model components are needed. 
Copied from diffusers import IFPipeline, IFSuperResolutionPipeline +import torch +import gc +from transformers import T5EncoderModel +from diffusers.utils import pt_to_pil, make_image_grid + +text_encoder = T5EncoderModel.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", subfolder="text_encoder", device_map="auto", load_in_8bit=True, variant="8bit" +) + +# text to image +pipe = DiffusionPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", + text_encoder=text_encoder, # pass the previously instantiated 8bit text encoder + unet=None, + device_map="auto", +) + +prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +# Remove the pipeline so we can re-load the pipeline with the unet +del text_encoder +del pipe +gc.collect() +torch.cuda.empty_cache() + +pipe = IFPipeline.from_pretrained( + "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_1_output = pipe( + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_1_output)[0].save("./if_stage_I.png") + +# Remove the pipeline so we can load the super-resolution pipeline +del pipe +gc.collect() +torch.cuda.empty_cache() + +# First super resolution + +pipe = IFSuperResolutionPipeline.from_pretrained( + "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16, device_map="auto" +) + +generator = torch.Generator().manual_seed(0) +stage_2_output = pipe( + image=stage_1_output, + prompt_embeds=prompt_embeds, + negative_prompt_embeds=negative_embeds, + output_type="pt", + generator=generator, +).images + +#pt_to_pil(stage_2_output)[0].save("./if_stage_II.png") +make_image_grid([pt_to_pil(stage_1_output)[0], pt_to_pil(stage_2_output)[0]], rows=1, rows=2) Available Pipelines: Pipeline Tasks Colab pipeline_if.py Text-to-Image Generation - pipeline_if_superresolution.py Text-to-Image Generation - pipeline_if_img2img.py Image-to-Image Generation - pipeline_if_img2img_superresolution.py Image-to-Image Generation - pipeline_if_inpainting.py Image-to-Image Generation - pipeline_if_inpainting_superresolution.py Image-to-Image Generation - IFPipeline class diffusers.IFPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 height: Optional = None width: Optional = None eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. height (int, optional, defaults to self.unet.config.sample_size) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size) — +The width in pixels of the generated image. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. 
When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt" +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> safety_modules = { +... "feature_extractor": pipe.feature_extractor, +... "safety_checker": pipe.safety_checker, +... "watermarker": pipe.watermarker, +... } +>>> super_res_2_pipe = DiffusionPipeline.from_pretrained( +... "stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16 +... ) +>>> super_res_2_pipe.enable_model_cpu_offload() + +>>> image = super_res_2_pipe( +... prompt=prompt, +... image=image, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFSuperResolutionPipeline class diffusers.IFSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None height: int = None width: int = None image: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. height (int, optional, defaults to None) — +The height in pixels of the generated image. width (int, optional, defaults to None) — +The width in pixels of the generated image. image (PIL.Image.Image, np.ndarray, torch.FloatTensor) — +The image to be upscaled. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional, defaults to None) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFPipeline, IFSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch + +>>> pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"' +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds +... 
).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFImg2ImgPipeline class diffusers.IFImg2ImgPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None strength: float = 0.7 num_inference_steps: int = 80 timesteps: List = None guidance_scale: float = 10.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. strength (float, optional, defaults to 0.7) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 80) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 10.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
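Because the T5 text encoder is the heaviest component, encode_prompt is typically called once up front and the resulting embeddings are reused across stages, as in the examples above. A minimal sketch (assuming the stage I image-to-image pipeline is already loaded as pipe and an input image is available as original_image; the negative prompt string is just an illustrative choice):

# Precompute the text embeddings once.
prompt_embeds, negative_embeds = pipe.encode_prompt(
    "A fantasy landscape in style minecraft",
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# The precomputed embeddings replace the plain-text prompt in the pipeline call.
image = pipe(
    image=original_image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    output_type="pt",
).images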
IFImg2ImgSuperResolutionPipeline class diffusers.IFImg2ImgSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 250 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. 
If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 250) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFImg2ImgPipeline, IFImg2ImgSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image.resize((768, 512)) + +>>> pipe = IFImg2ImgPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "A fantasy landscape in style minecraft" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFImg2ImgSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", +... text_encoder=None, +... variant="fp16", +... torch_dtype=torch.float16, +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... 
original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingPipeline class diffusers.IFInpaintingPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( prompt: Union = None image: Union = None mask_image: Union = None strength: float = 1.0 num_inference_steps: int = 50 timesteps: List = None guidance_scale: float = 7.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 clean_caption: bool = True cross_attention_kwargs: Optional = None ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 1.0) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. 
image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 7.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. 
Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) + +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. 
Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. IFInpaintingSuperResolutionPipeline class diffusers.IFInpaintingSuperResolutionPipeline < source > ( tokenizer: T5Tokenizer text_encoder: T5EncoderModel unet: UNet2DConditionModel scheduler: DDPMScheduler image_noising_scheduler: DDPMScheduler safety_checker: Optional feature_extractor: Optional watermarker: Optional requires_safety_checker: bool = True ) __call__ < source > ( image: Union original_image: Union = None mask_image: Union = None strength: float = 0.8 prompt: Union = None num_inference_steps: int = 100 timesteps: List = None guidance_scale: float = 4.0 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 cross_attention_kwargs: Optional = None noise_level: int = 0 clean_caption: bool = True ) → ~pipelines.stable_diffusion.IFPipelineOutput or tuple Parameters image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. original_image (torch.FloatTensor or PIL.Image.Image) — +The original image that image was varied from. mask_image (PIL.Image.Image) — +Image, or tensor representing an image batch, to mask image. White pixels in the mask will be +repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted +to a single channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) +instead of 3, so the expected shape would be (B, H, W, 1). strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. timesteps (List[int], optional) — +Custom timesteps to use for the denoising process. If not defined, equal spaced num_inference_steps +timesteps are used. Must be in descending order. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. 
If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.IFPipelineOutput instead of a plain tuple. callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under +self.processor in +diffusers.models.attention_processor. noise_level (int, optional, defaults to 0) — +The amount of noise to add to the upscaled image. Must be in the range [0, 1000) clean_caption (bool, optional, defaults to True) — +Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to +be installed. If the dependencies are not installed, the embeddings will be created from the raw +prompt. Returns +~pipelines.stable_diffusion.IFPipelineOutput or tuple + +~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) or watermarked content, according to the safety_checker`. + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline +>>> from diffusers.utils import pt_to_pil +>>> import torch +>>> from PIL import Image +>>> import requests +>>> from io import BytesIO + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" +>>> response = requests.get(url) +>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> original_image = original_image + +>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" +>>> response = requests.get(url) +>>> mask_image = Image.open(BytesIO(response.content)) +>>> mask_image = mask_image + +>>> pipe = IFInpaintingPipeline.from_pretrained( +... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 +... ) +>>> pipe.enable_model_cpu_offload() + +>>> prompt = "blue sunglasses" + +>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) +>>> image = pipe( +... image=original_image, +... mask_image=mask_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... output_type="pt", +... ).images + +>>> # save intermediate image +>>> pil_image = pt_to_pil(image) +>>> pil_image[0].save("./if_stage_I.png") + +>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( +... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 +... ) +>>> super_res_1_pipe.enable_model_cpu_offload() + +>>> image = super_res_1_pipe( +... image=image, +... mask_image=mask_image, +... original_image=original_image, +... prompt_embeds=prompt_embeds, +... negative_prompt_embeds=negative_embeds, +... ).images +>>> image[0].save("./if_stage_II.png") encode_prompt < source > ( prompt: Union do_classifier_free_guidance: bool = True num_images_per_prompt: int = 1 device: Optional = None negative_prompt: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None clean_caption: bool = False ) Parameters prompt (str or List[str], optional) — +prompt to be encoded do_classifier_free_guidance (bool, optional, defaults to True) — +whether to use classifier free guidance or not num_images_per_prompt (int, optional, defaults to 1) — +number of images that should be generated per prompt +device — (torch.device, optional): +torch device to place the resulting embeddings on negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. clean_caption (bool, defaults to False) — +If True, the function will preprocess and clean the provided caption before encoding. Encodes the prompt into text encoder hidden states. 
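The encode_prompt() method documented above is shared by the IF pipelines, and the embeddings it returns can be reused by both the stage I and stage II pipelines (as the examples above do). A minimal sketch of encoding a prompt together with a negative prompt; the checkpoint is the one used in the examples above, and the negative prompt text is only illustrative:

>>> import torch
>>> from diffusers import IFInpaintingPipeline

>>> pipe = IFInpaintingPipeline.from_pretrained(
...     "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe.enable_model_cpu_offload()

>>> # encode once; pass the returned embeddings to both pipeline stages
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(
...     "blue sunglasses",
...     negative_prompt="low quality, blurry",
... )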
diff --git a/scrapped_outputs/f92c3c07088600acdda830130c3bf227.txt b/scrapped_outputs/f92c3c07088600acdda830130c3bf227.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69636ab475595c240f0bd86a1983886d1f8de0d --- /dev/null +++ b/scrapped_outputs/f92c3c07088600acdda830130c3bf227.txt @@ -0,0 +1,40 @@ +DDIM Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase can be found at ermongroup/ddim. DDIMPipeline class diffusers.DDIMPipeline < source > ( unet scheduler ) Parameters unet (UNet2DModel) — +A UNet2DModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image. Can be one of +DDPMScheduler, or DDIMScheduler. Pipeline for image generation. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( batch_size: int = 1 generator: Union = None eta: float = 0.0 num_inference_steps: int = 50 use_clipped_model_output: Optional = None output_type: Optional = 'pil' return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters batch_size (int, optional, defaults to 1) — +The number of images to generate. generator (torch.Generator, optional) — +A torch.Generator to make +generation deterministic. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. A value of 0 corresponds to +DDIM and 1 corresponds to DDPM. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. use_clipped_model_output (bool, optional, defaults to None) — +If True or False, see documentation for DDIMScheduler.step(). If None, nothing is passed +downstream to the scheduler (use None for schedulers which don’t support this argument). output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + +If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is +returned where the first element is a list with the generated images + The call function to the pipeline for generation. 
Example: Copied >>> from diffusers import DDIMPipeline +>>> import PIL.Image +>>> import numpy as np + +>>> # load model and scheduler +>>> pipe = DDIMPipeline.from_pretrained("fusing/ddim-lsun-bedroom") + +>>> # run pipeline in inference (sample random noise and denoise); request NumPy output +>>> image = pipe(eta=0.0, num_inference_steps=50, output_type="np").images + +>>> # image values are floats in [0, 1]; convert to uint8 and then to PIL +>>> image_processed = (image * 255).round().astype(np.uint8) +>>> image_pil = PIL.Image.fromarray(image_processed[0]) + +>>> # save image +>>> image_pil.save("test.png") ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: Union ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. diff --git a/scrapped_outputs/f984f73f583ea844be77fb1d868cd942.txt b/scrapped_outputs/f984f73f583ea844be77fb1d868cd942.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/f9a20ffa70bff37ff6e927836804c2c7.txt b/scrapped_outputs/f9a20ffa70bff37ff6e927836804c2c7.txt new file mode 100644 index 0000000000000000000000000000000000000000..6eb814578b3c61caf6866a5ffadcbcf16e6fec47 --- /dev/null +++ b/scrapped_outputs/f9a20ffa70bff37ff6e927836804c2c7.txt @@ -0,0 +1,26 @@ +How to run Stable Diffusion with Core ML Core ML is the model format and machine learning library supported by Apple frameworks. If you are interested in running Stable Diffusion models inside your macOS or iOS/iPadOS apps, this guide will show you how to convert existing PyTorch checkpoints into the Core ML format and use them for inference with Python or Swift. Core ML models can leverage all the compute engines available in Apple devices: the CPU, the GPU, and the Apple Neural Engine (or ANE, a tensor-optimized accelerator available in Apple Silicon Macs and modern iPhones/iPads). Depending on the model and the device it’s running on, Core ML can mix and match compute engines too, so some portions of the model may run on the CPU while others run on GPU, for example. You can also run the diffusers Python codebase on Apple Silicon Macs using the mps accelerator built into PyTorch. This approach is explained in depth in the mps guide, but it is not compatible with native apps. Stable Diffusion Core ML Checkpoints Stable Diffusion weights (or checkpoints) are stored in the PyTorch format, so you need to convert them to the Core ML format before you can use them inside native apps. Thankfully, Apple engineers developed a conversion tool based on diffusers to convert the PyTorch checkpoints to Core ML. Before you convert a model, though, take a moment to explore the Hugging Face Hub – chances are the model you’re interested in is already available in Core ML format: the Apple organization includes Stable Diffusion versions 1.4, 1.5, 2.0 base, and 2.1 base coreml community includes custom finetuned models use this filter to return all available Core ML checkpoints If you can’t find the model you’re interested in, we recommend you follow the instructions for Converting Models to Core ML by Apple. Selecting the Core ML Variant to Use Stable Diffusion models can be converted to different Core ML variants intended for different purposes: The type of attention blocks used.
The attention operation is used to “pay attention” to the relationship between different areas in the image representations and to understand how the image and text representations are related. Attention is compute- and memory-intensive, so different implementations exist that consider the hardware characteristics of different devices. For Core ML Stable Diffusion models, there are two attention variants: split_einsum (introduced by Apple) is optimized for the ANE, which is available in modern iPhones, iPads, and M-series computers. The “original” attention (the base implementation used in diffusers) is only compatible with CPU/GPU and not ANE. Running your model on CPU + GPU with original attention can be faster than running it on the ANE. See this performance benchmark as well as some additional measures provided by the community for additional details. The supported inference framework: packages are suitable for Python inference. This can be used to test converted Core ML models before attempting to integrate them inside native apps, or if you want to explore Core ML performance but don’t need to support native apps. For example, an application with a web UI could perfectly use a Python Core ML backend. compiled models are required for Swift code. The compiled models in the Hub split the large UNet model weights into several files for compatibility with iOS and iPadOS devices. This corresponds to the --chunk-unet conversion option. If you want to support native apps, then you need to select the compiled variant. The official Core ML Stable Diffusion models include these variants, but the community ones may vary: Copied coreml-stable-diffusion-v1-4 +├── README.md +├── original +│ ├── compiled +│ └── packages +└── split_einsum + ├── compiled + └── packages You can download and use the variant you need as shown below. Core ML Inference in Python Install the following libraries to run Core ML inference in Python: Copied pip install huggingface_hub +pip install git+https://github.com/apple/ml-stable-diffusion Download the Model Checkpoints To run inference in Python, use one of the versions stored in the packages folders because the compiled ones are only compatible with Swift. You may choose whether you want to use original or split_einsum attention. This is how you’d download the original attention variant from the Hub to a directory called models: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/packages" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference Once you have downloaded a snapshot of the model, you can test it using Apple’s Python script. Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o output --compute-unit CPU_AND_GPU --seed 93 Pass the path of the downloaded checkpoint with the -i flag to the script. --compute-unit indicates the hardware you want to allow for inference. It must be one of the following options: ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE. You may also provide an output path (here, output) and a seed for reproducibility. The inference script assumes you’re using the original version of the Stable Diffusion model, CompVis/stable-diffusion-v1-4.
If you use another model, you have to specify its Hub id in the inference command line, using the --model-version option. This works for models already supported and custom models you trained or fine-tuned yourself. For example, if you want to use runwayml/stable-diffusion-v1-5: Copied python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" --compute-unit ALL -o output --seed 93 -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 Core ML inference in Swift Running inference in Swift is slightly faster than in Python because the models are already compiled in the mlmodelc format. This is noticeable on app startup when the model is loaded but shouldn’t be noticeable if you run several generations afterward. Download To run inference in Swift on your Mac, you need one of the compiled checkpoint versions. We recommend you download them locally using Python code similar to the previous example, but with one of the compiled variants: Copied from huggingface_hub import snapshot_download +from pathlib import Path + +repo_id = "apple/coreml-stable-diffusion-v1-4" +variant = "original/compiled" + +model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_")) +snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False) +print(f"Model downloaded at {model_path}") Inference To run inference, please clone Apple’s repo: Copied git clone https://github.com/apple/ml-stable-diffusion +cd ml-stable-diffusion And then use Apple’s command line tool, Swift Package Manager: Copied swift run StableDiffusionSample --resource-path models/coreml-stable-diffusion-v1-4_original_compiled --compute-units all "a photo of an astronaut riding a horse on mars" You have to specify in --resource-path one of the checkpoints downloaded in the previous step, so please make sure it contains compiled Core ML bundles with the extension .mlmodelc. The --compute-units has to be one of these values: all, cpuOnly, cpuAndGPU, cpuAndNeuralEngine. For more details, please refer to the instructions in Apple’s repo. Supported Diffusers Features The Core ML models and inference code don’t support many of the features, options, and flexibility of 🧨 Diffusers. These are some of the limitations to keep in mind: Core ML models are only suitable for inference. They can’t be used for training or fine-tuning. Only two schedulers have been ported to Swift, the default one used by Stable Diffusion and DPMSolverMultistepScheduler, which we ported to Swift from our diffusers implementation. We recommend you use DPMSolverMultistepScheduler, since it produces the same quality in about half the steps. Negative prompts, classifier-free guidance scale, and image-to-image tasks are available in the inference code. Advanced features such as depth guidance, ControlNet, and latent upscalers are not available yet. Apple’s conversion and inference repo and our own swift-coreml-diffusers repos are intended as technology demonstrators to enable other developers to build upon. If you feel strongly about any missing features, please feel free to open a feature request or, better yet, a contribution PR 🙂. Native Diffusers Swift app One easy way to run Stable Diffusion on your own Apple hardware is to use our open-source Swift repo, based on diffusers and Apple’s conversion and inference repo. You can study the code, compile it with Xcode and adapt it for your own needs. 
For your convenience, there’s also a standalone Mac app in the App Store, so you can play with it without having to deal with the code or IDE. If you are a developer and have determined that Core ML is the best solution to build your Stable Diffusion app, then you can use the rest of this guide to get started with your project. We can’t wait to see what you’ll build 🙂. diff --git a/scrapped_outputs/f9c6705f300311e3b87894d7c4968ce7.txt b/scrapped_outputs/f9c6705f300311e3b87894d7c4968ce7.txt new file mode 100644 index 0000000000000000000000000000000000000000..b413917c52bc7069ecb64d4b6c9ce531220bac25 --- /dev/null +++ b/scrapped_outputs/f9c6705f300311e3b87894d7c4968ce7.txt @@ -0,0 +1,87 @@ +Create reproducible pipelines Reproducibility is important for testing, replicating results, and can even be used to improve image quality. However, the randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get the exact same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range. Even then, tolerance varies depending on the diffusion pipeline and checkpoint. This is why it’s important to understand how to control sources of randomness in diffusion models or use deterministic algorithms. 💡 We strongly recommend reading PyTorch’s statement about reproducibility: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds. Control randomness During inference, pipelines rely heavily on random sampling operations which include creating the +Gaussian noise tensors to denoise and adding noise to the scheduling step. Take a look at the tensor values in the DDIMPipeline after two inference steps: Copied from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np").images +print(np.abs(image).sum()) Running the code above prints one value, but if you run it again you get a different value. What is going on here? Every time the pipeline is run, torch.randn uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time. But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU. 
CPU To generate reproducible results on a CPU, you’ll need to use a PyTorch Generator and set a seed: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) + +# create a generator for reproducibility +generator = torch.Generator(device="cpu").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) Now when you run the code above, it always prints a value of 1491.1711 no matter what because the Generator object with the seed is passed to all the random functions of the pipeline. If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result. 💡 It might be a bit unintuitive at first to pass Generator objects to the pipeline instead of +just integer values representing the seed, but this is the recommended design when dealing with +probabilistic models in PyTorch, as Generators are random states that can be +passed to multiple pipelines in a sequence. GPU Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility +generator = torch.Generator(device="cuda").manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU. To circumvent this problem, 🧨 Diffusers has a randn_tensor() function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The randn_tensor function is used everywhere inside the pipeline, allowing the user to always pass a CPU Generator even if the pipeline is run on a GPU. You’ll see the results are much closer now! Copied import torch +from diffusers import DDIMPipeline +import numpy as np + +model_id = "google/ddpm-cifar10-32" + +# load model and scheduler +ddim = DDIMPipeline.from_pretrained(model_id, use_safetensors=True) +ddim.to("cuda") + +# create a generator for reproducibility; notice you don't place it on the GPU! +generator = torch.manual_seed(0) + +# run pipeline for just two steps and return numpy tensor +image = ddim(num_inference_steps=2, output_type="np", generator=generator).images +print(np.abs(image).sum()) 💡 If reproducibility is important, we recommend always passing a CPU generator. +The performance loss is often neglectable, and you’ll generate much more similar +values than if the pipeline had been run on a GPU. Finally, for more complex pipelines such as UnCLIPPipeline, these are often extremely +susceptible to precision error propagation. Don’t expect similar results across +different GPU hardware or PyTorch versions. 
In this case, you’ll need to run +exactly the same hardware and PyTorch version for full reproducibility. Deterministic algorithms You can also configure PyTorch to use deterministic algorithms to create a reproducible pipeline. However, you should be aware that deterministic algorithms may be slower than nondeterministic ones and you may observe a decrease in performance. But if reproducibility is important to you, then this is the way to go! Nondeterministic behavior occurs when operations are launched in more than one CUDA stream. To avoid this, set the environment variable CUBLAS_WORKSPACE_CONFIG to :16:8 to only use one buffer size during runtime. PyTorch typically benchmarks multiple algorithms to select the fastest one, but if you want reproducibility, you should disable this feature because the benchmark may select different algorithms each time. Lastly, pass True to torch.use_deterministic_algorithms to enable deterministic algorithms. Copied import os +import torch + +os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":16:8" + +torch.backends.cudnn.benchmark = False +torch.use_deterministic_algorithms(True) Now when you run the same pipeline twice, you’ll get identical results. Copied import torch +from diffusers import DDIMScheduler, StableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cuda") +pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) +g = torch.Generator(device="cuda") + +prompt = "A bear is playing a guitar on Times Square" + +g.manual_seed(0) +result1 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +g.manual_seed(0) +result2 = pipe(prompt=prompt, num_inference_steps=50, generator=g, output_type="latent").images + +print("L_inf dist =", abs(result1 - result2).max()) +"L_inf dist = tensor(0., device='cuda:0')" diff --git a/scrapped_outputs/f9f7cff40f91df5a387424175679b020.txt b/scrapped_outputs/f9f7cff40f91df5a387424175679b020.txt new file mode 100644 index 0000000000000000000000000000000000000000..f559dcc80ec22dbf65c22dd7f4b1273f5e564097 --- /dev/null +++ b/scrapped_outputs/f9f7cff40f91df5a387424175679b020.txt @@ -0,0 +1,118 @@ +Latent upscaler The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation). Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! StableDiffusionLatentUpscalePipeline class diffusers.StableDiffusionLatentUpscalePipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. 
scheduler (SchedulerMixin) — +A EulerDiscreteScheduler to be used in combination with unet to denoise the encoded image latents. Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: from_single_file() for loading .ckpt files __call__ < source > ( prompt: Union image: Union = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: Union = None generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide image upscaling. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image or tensor representing an image batch to be upscaled. If it’s a tensor, it can be either a +latent output from a Stable Diffusion model or an image tensor in the range [-1, 1]. It is considered +a latent if image.shape[1] is 4; otherwise, it is considered to be an image representation and +encoded using this pipeline’s vae encoder. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images. 
+ The call function to the pipeline for generation. Examples: Copied >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline +>>> import torch + + +>>> pipeline = StableDiffusionPipeline.from_pretrained( +... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +... ) +>>> pipeline.to("cuda") + +>>> model_id = "stabilityai/sd-x2-latent-upscaler" +>>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +>>> upscaler.to("cuda") + +>>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" +>>> generator = torch.manual_seed(33) + +>>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images + +>>> with torch.no_grad(): +... image = pipeline.decode_latents(low_res_latents) +>>> image = pipeline.numpy_to_pil(image)[0] + +>>> image.save("../images/a1.png") + +>>> upscaled_image = upscaler( +... prompt=prompt, +... image=low_res_latents, +... num_inference_steps=20, +... guidance_scale=0, +... generator=generator, +... ).images[0] + +>>> upscaled_image.save("../images/a2.png") enable_sequential_cpu_offload < source > ( gpu_id: Optional = None device: Union = 'cuda' ) Parameters gpu_id (int, optional) — +The ID of the accelerator that shall be used in inference. If not specified, it will default to 0. device (torch.Device or str, optional, defaults to “cuda”) — +The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will +default to “cuda”. Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state +dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU +and then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward +method called. Offloading happens on a submodule basis. Memory savings are higher than with +enable_model_cpu_offload, but performance is lower. enable_attention_slicing < source > ( slice_size: Union = 'auto' ) Parameters slice_size (str or int, optional, defaults to "auto") — +When "auto", halves the input to the attention heads, so attention will be computed in two steps. If +"max", maximum amount of memory will be saved by running only one slice at a time. If a number is +provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim +must be a multiple of slice_size. Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor +in slices to compute attention in several steps. For more than one attention head, the computation is performed +sequentially over each head. This is useful to save some memory in exchange for a small speed decrease. ⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch +2.0 or xFormers. These attention computations are already very memory efficient so you won’t need to enable +this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slow downs! Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionPipeline + +>>> pipe = StableDiffusionPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", +... torch_dtype=torch.float16, +... use_safetensors=True, +... 
) + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> pipe.enable_attention_slicing() +>>> image = pipe(prompt).images[0] disable_attention_slicing < source > ( ) Disable sliced attention computation. If enable_attention_slicing was previously called, attention is +computed in one step. enable_xformers_memory_efficient_attention < source > ( attention_op: Optional = None ) Parameters attention_op (Callable, optional) — +Override the default None operator for use as op argument to the +memory_efficient_attention() +function of xFormers. Enable memory efficient attention from xFormers. When this +option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed +up during training is not guaranteed. ⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes +precedent. Examples: Copied >>> import torch +>>> from diffusers import DiffusionPipeline +>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp + +>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") +>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp) +>>> # Workaround for not accepting attention shape using VAE for Flash Attention +>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None) disable_xformers_memory_efficient_attention < source > ( ) Disable memory efficient attention from xFormers. disable_freeu < source > ( ) Disables the FreeU mechanism if enabled. enable_freeu < source > ( s1: float s2: float b1: float b2: float ) Parameters s1 (float) — +Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. s2 (float) — +Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to +mitigate “oversmoothing effect” in the enhanced denoising process. b1 (float) — Scaling factor for stage 1 to amplify the contributions of backbone features. b2 (float) — Scaling factor for stage 2 to amplify the contributions of backbone features. Enables the FreeU mechanism as in https://arxiv.org/abs/2309.11497. The suffixes after the scaling factors represent the stages where they are being applied. Please refer to the official repository for combinations of the values +that are known to work well for different pipelines such as Stable Diffusion v1, v2, and Stable Diffusion XL. StableDiffusionPipelineOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. 
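The memory-saving methods documented above can be combined when GPU memory is tight. A minimal sketch, assuming the same stabilityai/sd-x2-latent-upscaler checkpoint as in the example; which options you actually need depends on your hardware and attention backend:

import torch
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
)

# offload submodules to the CPU and move them to the GPU only when their
# forward method is called (slower, but uses much less memory)
upscaler.enable_sequential_cpu_offload()

# compute attention in slices to save additional memory; skip this if you
# already rely on PyTorch 2.0 SDPA or xFormers (see the warning above)
upscaler.enable_attention_slicing("auto")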
diff --git a/scrapped_outputs/fa08525dda3cdb6dc72f10ab204ed73b.txt b/scrapped_outputs/fa08525dda3cdb6dc72f10ab204ed73b.txt new file mode 100644 index 0000000000000000000000000000000000000000..28d0025fe6227f68f990a2d355304bcc0dc60e92 --- /dev/null +++ b/scrapped_outputs/fa08525dda3cdb6dc72f10ab204ed73b.txt @@ -0,0 +1,112 @@ +Unconditional Latent Diffusion + + +Overview + +Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion_uncond.py +Unconditional Image Generation +- + +Examples: + + +LDMPipeline + + +class diffusers.LDMPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: DDIMScheduler + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +DDIMScheduler is to be used in combination with unet to denoise the encoded image latents. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +batch_size: int = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +eta: float = 0.0 +num_inference_steps: int = 50 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. 
+ + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/fa17b9b34ac623846de549f1c02089c2.txt b/scrapped_outputs/fa17b9b34ac623846de549f1c02089c2.txt new file mode 100644 index 0000000000000000000000000000000000000000..468c0483a2546314fa3f8291e558ee4a11ec620d --- /dev/null +++ b/scrapped_outputs/fa17b9b34ac623846de549f1c02089c2.txt @@ -0,0 +1,69 @@ +JAX/Flax 🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax. Before you begin, make sure you have the necessary libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy +#!pip install -q diffusers You should also make sure you’re using a TPU backend. While JAX does not run exclusively on TPUs, you’ll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel. If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then select TPU under the Hardware accelerator setting. Import JAX and quickly check whether you’re using a TPU: Copied import jax +import jax.tools.colab_tpu +jax.tools.colab_tpu.setup_tpu() + +num_devices = jax.device_count() +device_type = jax.devices()[0].device_kind + +print(f"Found {num_devices} JAX devices of type {device_type}.") +assert ( + "TPU" in device_type, + "Available device is not a TPU, please select TPU from Runtime > Change runtime type > Hardware accelerator" +) +# Found 8 JAX devices of type Cloud TPU. Great, now you can import the rest of the dependencies you’ll need: Copied import jax.numpy as jnp +from jax import pmap +from flax.jax_utils import replicate +from flax.training.common_utils import shard + +from diffusers import FlaxStableDiffusionPipeline Load a model Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you’ll use bfloat16, a more efficient half-float type that is supported by TPUs (you can also use float32 for full precision if you want). Copied dtype = jnp.bfloat16 +pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", + revision="bf16", + dtype=dtype, +) Inference TPUs usually have 8 devices working in parallel, so let’s use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you’ll get 8 images in the same amount of time it takes for one chip to generate a single image! Learn more details in the How does parallelization work? section. 
After replicating the prompt, get the tokenized text ids by calling the prepare_inputs function on the pipeline. The length of the tokenized text is set to 77 tokens as required by the configuration of the underlying CLIP text model. Copied prompt = "A cinematic film still of Morgan Freeman starring as Jimi Hendrix, portrait, 40mm lens, shallow depth of field, close up, split lighting, cinematic" +prompt = [prompt] * jax.device_count() +prompt_ids = pipeline.prepare_inputs(prompt) +prompt_ids.shape +# (8, 77) Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard. Copied # parameters +p_params = replicate(params) + +# arrays +prompt_ids = shard(prompt_ids) +prompt_ids.shape +# (8, 1, 77) This shape means each one of the 8 devices receives as an input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could have a batch size larger than 1 if you want to generate multiple images (per chip) at once. Next, create a random number generator to pass to the generation function. This is standard procedure in Flax, which is very serious and opinionated about random numbers. All functions that deal with random numbers are expected to receive a generator to ensure reproducibility, even when you’re training across multiple distributed devices. The helper function below uses a seed to initialize a random number generator. As long as you use the same seed, you’ll get the exact same results. Feel free to use different seeds when exploring results later in the guide. Copied def create_key(seed=0): + return jax.random.PRNGKey(seed) The helper function, or rng, is split 8 times so each device receives a different generator and generates a different image. Copied rng = create_key(0) +rng = jax.random.split(rng, jax.device_count()) To take advantage of JAX’s optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. You need to ensure all your inputs have the same shape in subsequent calls, otherwise JAX will need to recompile the code which is slower. The first inference run takes more time because it needs to compile the code, but subsequent calls (even with different inputs) are much faster. For example, it took more than a minute to compile on a TPU v2-8, but then it takes about 7s on a future inference run! Copied %%time +images = pipeline(prompt_ids, p_params, rng, jit=True)[0] + +# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s +# Wall time: 1min 29s The returned array has shape (8, 1, 512, 512, 3) which should be reshaped to remove the second dimension and get 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images. Copied from diffusers.utils import make_image_grid + +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) +make_image_grid(images, rows=2, cols=4) Using different prompts You don’t necessarily have to use the same prompt on all devices. 
For example, to generate 8 different prompts: Copied prompts = [ + "Labrador in the style of Hokusai", + "Painting of a squirrel skating in New York", + "HAL-9000 in the style of Van Gogh", + "Times Square under water, with fish and a dolphin swimming around", + "Ancient Roman fresco showing a man working on his laptop", + "Close-up photograph of young black woman against urban background, high quality, bokeh", + "Armchair in the shape of an avocado", + "Clown astronaut in space, with Earth in the background", +] + +prompt_ids = pipeline.prepare_inputs(prompts) +prompt_ids = shard(prompt_ids) + +images = pipeline(prompt_ids, p_params, rng, jit=True).images +images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) +images = pipeline.numpy_to_pil(images) + +make_image_grid(images, 2, 4) How does parallelization work? The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let’s take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function to achieve single-program multiple-data (SPMD) parallelization. It means running several copies of the same code, each on different data inputs. More sophisticated approaches are possible, and you can go over to the JAX documentation to explore this topic in more detail if you are interested! jax.pmap does two things: Compiles (or ”jits”) the code which is similar to jax.jit(). This does not happen when you call pmap, and only the first time the pmapped function is called. Ensures the compiled code runs in parallel on all available devices. To demonstrate, call pmap on the pipeline’s _generate method (this is a private method that generates images and may be renamed or removed in future releases of 🤗 Diffusers): Copied p_generate = pmap(pipeline._generate) After calling pmap, the prepared function p_generate will: Make a copy of the underlying function, pipeline._generate, on each device. Send each device a different portion of the input arguments (this is why it’s necessary to call the shard function). In this case, prompt_ids has shape (8, 1, 77, 768) so the array is split into 8 and each copy of _generate receives an input with shape (1, 77, 768). The most important thing to pay attention to here is the batch size (1 in this example), and the input dimensions that make sense for your code. You don’t have to change anything else to make the code work in parallel. The first time you call the pipeline takes more time, but the calls afterward are much faster. The block_until_ready function is used to correctly measure inference time because JAX uses asynchronous dispatch and returns control to the Python loop as soon as it can. You don’t need to use that in your code; blocking occurs automatically when you want to use the result of a computation that has not yet been materialized. 
Copied %%time +images = p_generate(prompt_ids, p_params, rng) +images = images.block_until_ready() + +# CPU times: user 1min 15s, sys: 18.2 s, total: 1min 34s +# Wall time: 1min 15s Check your image dimensions to see if they’re correct: Copied images.shape +# (8, 1, 512, 512, 3) diff --git a/scrapped_outputs/fa210ef6a066cd0c791638ebf11395af.txt b/scrapped_outputs/fa210ef6a066cd0c791638ebf11395af.txt new file mode 100644 index 0000000000000000000000000000000000000000..84d54169e993a685cd0d3adbb2feeedc473399bb --- /dev/null +++ b/scrapped_outputs/fa210ef6a066cd0c791638ebf11395af.txt @@ -0,0 +1,54 @@ +DDPMScheduler Denoising Diffusion Probabilistic Models (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes a diffusion based model of the same name. In the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline. The abstract from the paper is: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at this https URL. DDPMScheduler class diffusers.DDPMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' steps_offset: int = 0 rescale_betas_zero_snr: int = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. variance_type (str, defaults to "fixed_small") — +Clip the variance when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, +fixed_large, fixed_large_log, learned or learned_range. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. 
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDPMScheduler explores the connections between denoising score matching and Langevin dynamics sampling. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: Optional = None device: Union = None timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. timesteps (List[int], optional) — +Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default +timestep spacing strategy of equal spacing between timesteps is used. If timesteps is passed, +num_inference_steps must be None. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True ) → DDPMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +DDPMSchedulerOutput or tuple + +If return_dict is True, DDPMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
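To show how set_timesteps() and step() fit together, here is a minimal, self-contained denoising loop. It is only a sketch: the checkpoint name is an assumption, and any unconditional UNet2DModel/DDPMScheduler pair can be substituted; post-processing the final sample into an image is omitted for brevity.

import torch
from diffusers import DDPMScheduler, UNet2DModel

# Assumed example checkpoint; any unconditional DDPM-style model works the same way
repo_id = "google/ddpm-cat-256"
model = UNet2DModel.from_pretrained(repo_id)
scheduler = DDPMScheduler.from_pretrained(repo_id)

# Choose how many of the trained timesteps to actually run at inference
scheduler.set_timesteps(num_inference_steps=50)

# Start from pure Gaussian noise with the model's expected spatial size
sample = torch.randn((1, 3, model.config.sample_size, model.config.sample_size))

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample
    # step() reverses one diffusion step; prev_sample is the input to the next iteration
    sample = scheduler.step(noise_pred, t, sample).prev_sample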
DDPMSchedulerOutput class diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/fa3647303069d945d02ca03fe05d4845.txt b/scrapped_outputs/fa3647303069d945d02ca03fe05d4845.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fa79a794a514de9654781897fd8e15e2.txt b/scrapped_outputs/fa79a794a514de9654781897fd8e15e2.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fa93d32207046aa97e49dfb34b28597c.txt b/scrapped_outputs/fa93d32207046aa97e49dfb34b28597c.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fad300eb39f91f99b6e0c32b2baec540.txt b/scrapped_outputs/fad300eb39f91f99b6e0c32b2baec540.txt new file mode 100644 index 0000000000000000000000000000000000000000..0a7cc0b79a2823c78003b419462fee63e47bb1de --- /dev/null +++ b/scrapped_outputs/fad300eb39f91f99b6e0c32b2baec540.txt @@ -0,0 +1,18 @@ +ONNX Runtime 🤗 Optimum provides a Stable Diffusion pipeline compatible with ONNX Runtime. You’ll need to install 🤗 Optimum with the following command for ONNX Runtime support: Copied pip install -q optimum["onnxruntime"] This guide will show you how to use the Stable Diffusion and Stable Diffusion XL (SDXL) pipelines with ONNX Runtime. Stable Diffusion To load and run inference, use the ORTStableDiffusionPipeline. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set export=True: Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "runwayml/stable-diffusion-v1-5" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] +pipeline.save_pretrained("./onnx-stable-diffusion-v1-5") Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching. To export the pipeline in the ONNX format offline and use it later for inference, +use the optimum-cli export command: Copied optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/ Then to perform inference (you don’t have to specify export=True again): Copied from optimum.onnxruntime import ORTStableDiffusionPipeline + +model_id = "sd_v15_onnx" +pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] You can find more examples in 🤗 Optimum documentation, and Stable Diffusion is supported for text-to-image, image-to-image, and inpainting. 
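As a sketch of the image-to-image support mentioned above, the following assumes the sd_v15_onnx folder exported earlier can be reused with ORTStableDiffusionImg2ImgPipeline; the init image URL, resize, and strength value are illustrative assumptions rather than recommended settings.

from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Reuse the folder produced by `optimum-cli export onnx` above (assumption)
pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained("sd_v15_onnx")

# Example init image; resized to the model's native 512x512 resolution
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
).resize((512, 512))

prompt = "sailing ship in storm by Leonardo da Vinci"
# strength controls how much of the init image is preserved versus repainted
image = pipeline(prompt, image=init_image, strength=0.75).images[0]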
Stable Diffusion XL To load and run inference with SDXL, use the ORTStableDiffusionXLPipeline: Copied from optimum.onnxruntime import ORTStableDiffusionXLPipeline + +model_id = "stabilityai/stable-diffusion-xl-base-1.0" +pipeline = ORTStableDiffusionXLPipeline.from_pretrained(model_id) +prompt = "sailing ship in storm by Leonardo da Vinci" +image = pipeline(prompt).images[0] To export the pipeline in the ONNX format and use it later for inference, use the optimum-cli export command: Copied optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/ SDXL in the ONNX format is supported for text-to-image and image-to-image. diff --git a/scrapped_outputs/fb07c019938538b5cd744ed5b4d7a3a7.txt b/scrapped_outputs/fb07c019938538b5cd744ed5b4d7a3a7.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fb1ead8e4ba44faa0d74f1bd8106ada9.txt b/scrapped_outputs/fb1ead8e4ba44faa0d74f1bd8106ada9.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae35bd71905061d7430ba6a839a139739f34ded5 --- /dev/null +++ b/scrapped_outputs/fb1ead8e4ba44faa0d74f1bd8106ada9.txt @@ -0,0 +1,84 @@ +Self-Attention Guidance Improving Sample Quality of Diffusion Models Using Self-Attention Guidance is by Susung Hong et al. The abstract from the paper is: Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement. You can find additional information about Self-Attention Guidance on the project page, original codebase, and try it out in a demo or notebook. Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. 
StableDiffusionSAGPipeline class diffusers.StableDiffusionSAGPipeline < source > ( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: Optional = None requires_safety_checker: bool = True ) Parameters vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations. text_encoder (CLIPTextModel) — +Frozen text-encoder (clip-vit-large-patch14). tokenizer (CLIPTokenizer) — +A CLIPTokenizer to tokenize text. unet (UNet2DConditionModel) — +A UNet2DConditionModel to denoise the encoded image latents. scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please refer to the model card for more details +about a model’s potential harms. feature_extractor (CLIPImageProcessor) — +A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker. Pipeline for text-to-image generation using Stable Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods +implemented for all pipelines (downloading, saving, running on a particular device, etc.). The pipeline also inherits the following loading methods: load_textual_inversion() for loading textual inversion embeddings load_ip_adapter() for loading IP Adapters __call__ < source > ( prompt: Union = None height: Optional = None width: Optional = None num_inference_steps: int = 50 guidance_scale: float = 7.5 sag_scale: float = 0.75 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: float = 0.0 generator: Union = None latents: Optional = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None output_type: Optional = 'pil' return_dict: bool = True callback: Optional = None callback_steps: Optional = 1 cross_attention_kwargs: Optional = None clip_skip: Optional = None ) → StableDiffusionPipelineOutput or tuple Parameters prompt (str or List[str], optional) — +The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds. height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 7.5) — +A higher guidance scale value encourages the model to generate images closely linked to the text +prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. sag_scale (float, optional, defaults to 0.75) — +Chosen between [0, 1.0] for better quality. negative_prompt (str or List[str], optional) — +The prompt or prompts to guide what to not include in image generation. If not defined, you need to +pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). 
num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) from the DDIM paper. Only applies +to the DDIMScheduler, and is ignored in other schedulers. generator (torch.Generator or List[torch.Generator], optional) — +A torch.Generator to make +generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor is generated by sampling using the supplied random generator. prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not +provided, text embeddings are generated from the prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If +not provided, negative_prompt_embeds are generated from the negative_prompt input argument. +ip_adapter_image — (PipelineImageInput, optional): +Optional image input to work with IP Adapters. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in +self.processor. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Returns +StableDiffusionPipelineOutput or tuple + +If return_dict is True, StableDiffusionPipelineOutput is returned, +otherwise a tuple is returned where the first element is a list with the generated images and the +second element is a list of bools indicating whether the corresponding generated image contains +“not-safe-for-work” (nsfw) content. + The call function to the pipeline for generation. Examples: Copied >>> import torch +>>> from diffusers import StableDiffusionSAGPipeline + +>>> pipe = StableDiffusionSAGPipeline.from_pretrained( +... "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +... ) +>>> pipe = pipe.to("cuda") + +>>> prompt = "a photo of an astronaut riding a horse on mars" +>>> image = pipe(prompt, sag_scale=0.75).images[0] disable_vae_slicing < source > ( ) Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to +computing decoding in one step. enable_vae_slicing < source > ( ) Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to +compute decoding in several steps. This is useful to save some memory and allow larger batch sizes. 
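For illustration, a minimal sketch of toggling sliced VAE decoding on the SAG pipeline; the checkpoint and prompt mirror the example above, and the batch of four prompts is only an assumption to show where slicing helps.

import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode latents slice by slice to lower peak memory when generating several images at once
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 4, sag_scale=0.75).images

# Restore single-step decoding afterwards
pipe.disable_vae_slicing()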
encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None ) Parameters prompt (str or List[str], optional) — +prompt to be encoded +device — (torch.device): +torch device num_images_per_prompt (int) — +number of images that should be generated per prompt do_classifier_free_guidance (bool) — +whether to use classifier free guidance or not negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is +less than 1). prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. lora_scale (float, optional) — +A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded. clip_skip (int, optional) — +Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that +the output of the pre-final layer will be used for computing the prompt embeddings. Encodes the prompt into text encoder hidden states. StableDiffusionOutput class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional ) Parameters images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). nsfw_content_detected (List[bool]) — +List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content or +None if safety checking could not be performed. Output class for Stable Diffusion pipelines. diff --git a/scrapped_outputs/fb3268b3d490692fae6a20e5d65cda97.txt b/scrapped_outputs/fb3268b3d490692fae6a20e5d65cda97.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/fb3268b3d490692fae6a20e5d65cda97.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. 
This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. 
Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/fb44b87cedd2057bef0ba6fb88ca77e7.txt b/scrapped_outputs/fb44b87cedd2057bef0ba6fb88ca77e7.txt new file mode 100644 index 0000000000000000000000000000000000000000..5afc2be3d91199356b9d7628f7ca4a75d3ed1ce9 --- /dev/null +++ b/scrapped_outputs/fb44b87cedd2057bef0ba6fb88ca77e7.txt @@ -0,0 +1,74 @@ +DDIMScheduler Denoising Diffusion Implicit Models (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. The abstract from the paper is: Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. +To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models +with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. +We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. +We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space. The original codebase of this paper can be found at ermongroup/ddim, and you can contact the author on tsong.me. Tips The paper Common Diffusion Noise Schedules and Sample Steps are Flawed claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. 
To fix this, the authors propose: 🧪 This is an experimental feature! rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) train a model with v_prediction (add the following argument to the train_text_to_image.py or train_text_to_image_lora.py scripts) Copied --prediction_type="v_prediction" change the sampler to always start from the last timestep Copied pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") rescale classifier-free guidance to prevent over-exposure Copied image = pipe(prompt, guidance_rescale=0.7).images[0] For example: Copied from diffusers import DiffusionPipeline, DDIMScheduler +import torch + +pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) +pipe.scheduler = DDIMScheduler.from_config( + pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipe.to("cuda") + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipe(prompt, guidance_rescale=0.7).images[0] +image DDIMScheduler class diffusers.DDIMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 clip_sample_range: float = 1.0 sample_max_value: float = 1.0 timestep_spacing: str = 'leading' rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. set_alpha_to_one (bool, defaults to True) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). thresholding (bool, defaults to False) — +Whether to use the “dynamic thresholding” method. This is unsuitable for latent-space diffusion models such +as Stable Diffusion. 
dynamic_thresholding_ratio (float, defaults to 0.995) — +The ratio for the dynamic thresholding method. Valid only when thresholding=True. sample_max_value (float, defaults to 1.0) — +The threshold value for dynamic thresholding. Valid only when thresholding=True. timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. DDIMScheduler extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with +non-Markovian guidance. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: Optional = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. eta (float) — +The weight of noise for added noise in diffusion step. use_clipped_model_output (bool, defaults to False) — +If True, computes “corrected” model_output from the clipped predicted original sample. Necessary +because predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no +clipping has happened, “corrected” model_output would coincide with the one provided as input and +use_clipped_model_output has no effect. generator (torch.Generator, optional) — +A random number generator. variance_noise (torch.FloatTensor) — +Alternative to generating noise with generator by directly providing the noise for the variance +itself. Useful for methods such as CycleDiffusion. return_dict (bool, optional, defaults to True) — +Whether or not to return a DDIMSchedulerOutput or tuple. Returns +~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple + +If return_dict is True, DDIMSchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
DDIMSchedulerOutput class diffusers.schedulers.scheduling_ddim.DDIMSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/fb4adf70541b7fccf820eebfdd4df60e.txt b/scrapped_outputs/fb4adf70541b7fccf820eebfdd4df60e.txt new file mode 100644 index 0000000000000000000000000000000000000000..28bbd8ab513203be6ef3f2a2c6fc1c0624afc176 --- /dev/null +++ b/scrapped_outputs/fb4adf70541b7fccf820eebfdd4df60e.txt @@ -0,0 +1,469 @@ +DreamBooth + + + + + + + + + + + + +DreamBooth is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject. It allows the model to generate contextualized images of the subject in different scenes, poses, and views. + +Dreambooth examples from the project's blog. +This guide will show you how to finetune DreamBooth with the CompVis/stable-diffusion-v1-4 model for various GPU sizes, and with Flax. All the training scripts for DreamBooth used in this guide can be found here if you’re interested in digging deeper and seeing how things work. +Before running the scripts, make sure you install the library’s training dependencies. We also recommend installing 🧨 Diffusers from the main GitHub branch: + + + Copied +pip install git+https://github.com/huggingface/diffusers +pip install -U -r diffusers/examples/dreambooth/requirements.txt +xFormers is not part of the training requirements, but we recommend you install it if you can because it could make your training faster and less memory intensive. +After all the dependencies have been set up, initialize a 🤗 Accelerate environment with: + + + Copied +accelerate config +To setup a default 🤗 Accelerate environment without choosing any configurations: + + + Copied +accelerate config default +Or if your environment doesn’t support an interactive shell like a notebook, you can use: + + + Copied +from accelerate.utils import write_basic_config + +write_basic_config() +Finally, download a few images of a dog to DreamBooth with: + + + Copied +from huggingface_hub import snapshot_download + +local_dir = "./dog" +snapshot_download( + "diffusers/dog-example", + local_dir=local_dir, + repo_type="dataset", + ignore_patterns=".gitattributes", +) + +Finetuning + +DreamBooth finetuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our in-depth analysis with recommended settings for different subjects to help you choose the appropriate hyperparameters. + + +Pytorch + +Hide Pytorch content + +Set the INSTANCE_DIR environment variable to the path of the directory containing the dog images. +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. 
+ + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path_to_saved_model" +Then you can launch the training script (you can find the full training script here) with the following command: + + + Copied +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --max_train_steps=400 + +JAX + +Hide JAX content + +If you have access to TPUs or want to train even faster, you can try out the Flax training script. The Flax training script doesn’t support gradient checkpointing or gradient accumulation, so you’ll need a GPU with at least 30GB of memory. +Before running the script, make sure you have the requirements installed: + + + Copied +pip install -U -r requirements.txt +Specify the MODEL_NAME environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the ~diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path argument. +Now you can launch the training script with the following command: + + + Copied +export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export INSTANCE_DIR="./dog" +export OUTPUT_DIR="path-to-save-model" + +python train_dreambooth_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --output_dir=$OUTPUT_DIR \ + --instance_prompt="a photo of sks dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --learning_rate=5e-6 \ + --max_train_steps=400 + + +Finetuning with prior-preserving loss + +Prior preservation is used to avoid overfitting and language-drift (check out the paper to learn more if you’re interested). For prior preservation, you use other images of the same class as part of the training process. The nice thing is that you can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path you specify. +The authors recommend generating num_epochs * num_samples images for prior preservation. In most cases, 200-300 images work well. 
+ + +Pytorch + +Hide Pytorch content + + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +JAX + +Hide JAX content + + + + Copied +export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path-to-class-images" +export OUTPUT_DIR="path-to-save-model" + +python train_dreambooth_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --learning_rate=5e-6 \ + --num_class_images=200 \ + --max_train_steps=800 + + +Finetuning the text encoder and UNet + +The script also allows you to finetune the text_encoder along with the unet. In our experiments (check out the Training Stable Diffusion with DreamBooth using 🧨 Diffusers post for more details), this yields much better results, especially when generating images of faces. +Training the text encoder requires additional memory and it won’t fit on a 16GB GPU. You’ll need at least 24GB VRAM to use this option. 
+Pass the --train_text_encoder argument to the training script to enable finetuning the text_encoder and unet: + + +Pytorch + +Hide Pytorch content + + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_text_encoder \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --use_8bit_adam + --gradient_checkpointing \ + --learning_rate=2e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +JAX + +Hide JAX content + + + + Copied +export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path-to-class-images" +export OUTPUT_DIR="path-to-save-model" + +python train_dreambooth_flax.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --train_text_encoder \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --learning_rate=2e-6 \ + --num_class_images=200 \ + --max_train_steps=800 + + +Finetuning with LoRA + +You can also use Low-Rank Adaptation of Large Language Models (LoRA), a fine-tuning technique for accelerating training large models, on DreamBooth. For more details, take a look at the LoRA training guide. + +Saving checkpoints while training + +It’s easy to overfit while training with Dreambooth, so sometimes it’s useful to save regular checkpoints during the training process. One of the intermediate checkpoints might actually work better than the final model! Pass the following argument to the training script to enable saving checkpoints: + + + Copied + --checkpointing_steps=500 +This saves the full training state in subfolders of your output_dir. Subfolder names begin with the prefix checkpoint-, followed by the number of steps performed so far; for example, checkpoint-1500 would be a checkpoint saved after 1500 training steps. + +Resume training from a saved checkpoint + +If you want to resume training from any of the saved checkpoints, you can pass the argument --resume_from_checkpoint to the script and specify the name of the checkpoint you want to use. You can also use the special string "latest" to resume from the last saved checkpoint (the one with the largest number of steps). For example, the following would resume training from the checkpoint saved after 1500 steps: + + + Copied + --resume_from_checkpoint="checkpoint-1500" +This is a good opportunity to tweak some of your hyperparameters if you wish. + +Inference from a saved checkpoint + +Saved checkpoints are stored in a format suitable for resuming training. They not only include the model weights, but also the state of the optimizer, data loaders, and learning rate. +If you have "accelerate>=0.16.0" installed, use the following code to run +inference from an intermediate checkpoint. 
+ + + Copied +from diffusers import DiffusionPipeline, UNet2DConditionModel +from transformers import CLIPTextModel +import torch + +# Load the pipeline with the same arguments (model, revision) that were used for training +model_id = "CompVis/stable-diffusion-v1-4" + +unet = UNet2DConditionModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/unet") + +# if you have trained with `--args.train_text_encoder` make sure to also load the text encoder +text_encoder = CLIPTextModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/text_encoder") + +pipeline = DiffusionPipeline.from_pretrained(model_id, unet=unet, text_encoder=text_encoder, dtype=torch.float16) +pipeline.to("cuda") + +# Perform inference, or save, or push to the hub +pipeline.save_pretrained("dreambooth-pipeline") +If you have "accelerate<0.16.0" installed, you need to convert it to an inference pipeline first: + + + Copied +from accelerate import Accelerator +from diffusers import DiffusionPipeline + +# Load the pipeline with the same arguments (model, revision) that were used for training +model_id = "CompVis/stable-diffusion-v1-4" +pipeline = DiffusionPipeline.from_pretrained(model_id) + +accelerator = Accelerator() + +# Use text_encoder if `--train_text_encoder` was used for the initial training +unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder) + +# Restore state from a checkpoint path. You have to use the absolute path here. +accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100") + +# Rebuild the pipeline with the unwrapped models (assignment to .unet and .text_encoder should work too) +pipeline = DiffusionPipeline.from_pretrained( + model_id, + unet=accelerator.unwrap_model(unet), + text_encoder=accelerator.unwrap_model(text_encoder), +) + +# Perform inference, or save, or push to the hub +pipeline.save_pretrained("dreambooth-pipeline") + +Optimizations for different GPU sizes + +Depending on your hardware, there are a few different ways to optimize DreamBooth on GPUs from 16GB to just 8GB! + +xFormers + +xFormers is a toolbox for optimizing Transformers, and it includes a memory-efficient attention mechanism that is used in 🧨 Diffusers. You’ll need to install xFormers and then add the following argument to your training script: + + + Copied + --enable_xformers_memory_efficient_attention +xFormers is not available in Flax. + +Set gradients to none + +Another way you can lower your memory footprint is to set the gradients to None instead of zero. However, this may change certain behaviors, so if you run into any issues, try removing this argument. Add the following argument to your training script to set the gradients to None: + + + Copied + --set_grads_to_none + +16GB GPU + +With the help of gradient checkpointing and bitsandbytes 8-bit optimizer, it’s possible to train DreamBooth on a 16GB GPU. 
Make sure you have bitsandbytes installed: + + + Copied +pip install bitsandbytes +Then pass the --use_8bit_adam option to the training script: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=2 --gradient_checkpointing \ + --use_8bit_adam \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +12GB GPU + +To run DreamBooth on a 12GB GPU, you’ll need to enable gradient checkpointing, the 8-bit optimizer, xFormers, and set the gradients to None: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path-to-class-images" +export OUTPUT_DIR="path-to-save-model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --gradient_accumulation_steps=1 --gradient_checkpointing \ + --use_8bit_adam \ + --enable_xformers_memory_efficient_attention \ + --set_grads_to_none \ + --learning_rate=2e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 + +8 GB GPU + +For 8GB GPUs, you’ll need the help of DeepSpeed to offload some +tensors from the VRAM to either the CPU or NVME, enabling training with less GPU memory. +Run the following command to configure your 🤗 Accelerate environment: + + + Copied +accelerate config +During configuration, confirm that you want to use DeepSpeed. Now it’s possible to train on under 8GB VRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM, about 25 GB. See the DeepSpeed documentation for more configuration options. +You should also change the default Adam optimizer to DeepSpeed’s optimized version of Adam +deepspeed.ops.adam.DeepSpeedCPUAdam for a substantial speedup. Enabling DeepSpeedCPUAdam requires your system’s CUDA toolchain version to be the same as the one installed with PyTorch. +8-bit optimizers don’t seem to be compatible with DeepSpeed at the moment. 
+Launch training with the following command: + + + Copied +export MODEL_NAME="CompVis/stable-diffusion-v1-4" +export INSTANCE_DIR="./dog" +export CLASS_DIR="path_to_class_images" +export OUTPUT_DIR="path_to_saved_model" + +accelerate launch train_dreambooth.py \ + --pretrained_model_name_or_path=$MODEL_NAME \ + --instance_data_dir=$INSTANCE_DIR \ + --class_data_dir=$CLASS_DIR \ + --output_dir=$OUTPUT_DIR \ + --with_prior_preservation --prior_loss_weight=1.0 \ + --instance_prompt="a photo of sks dog" \ + --class_prompt="a photo of dog" \ + --resolution=512 \ + --train_batch_size=1 \ + --sample_batch_size=1 \ + --gradient_accumulation_steps=1 --gradient_checkpointing \ + --learning_rate=5e-6 \ + --lr_scheduler="constant" \ + --lr_warmup_steps=0 \ + --num_class_images=200 \ + --max_train_steps=800 \ + --mixed_precision=fp16 + +Inference + +Once you have trained a model, specify the path to where the model is saved, and use it for inference in the StableDiffusionPipeline. Make sure your prompts include the special identifier used during training (sks in the previous examples). +If you have "accelerate>=0.16.0" installed, you can use the following code to run +inference from an intermediate checkpoint: + + + Copied +from diffusers import DiffusionPipeline +import torch + +model_id = "path_to_saved_model" +pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") + +prompt = "A photo of sks dog in a bucket" +image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] + +image.save("dog-bucket.png") +You may also run inference from any of the saved training checkpoints. diff --git a/scrapped_outputs/fb5595a4db847b9088e51d021abda8f0.txt b/scrapped_outputs/fb5595a4db847b9088e51d021abda8f0.txt new file mode 100644 index 0000000000000000000000000000000000000000..4f5f95602b52a4c51557a01aeb7698b87042d05c --- /dev/null +++ b/scrapped_outputs/fb5595a4db847b9088e51d021abda8f0.txt @@ -0,0 +1,737 @@ +Zero-shot Image-to-Image Translation + + +Overview + +Zero-shot Image-to-Image Translation. +The abstract of the paper is the following: +Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing. +Resources: +Project Page. +Paper. +Original Code. +Demo. + +Tips + +The pipeline can be conditioned on real input images. 
Check out the code examples below to know more. +The pipeline exposes two arguments namely source_embeds and target_embeds +that let you control the direction of the semantic edits in the final image to be generated. Let’s say, +you wanted to translate from “cat” to “dog”. In this case, the edit direction will be “cat -> dog”. To reflect +this in the pipeline, you simply have to set the embeddings related to the phrases including “cat” to +source_embeds and “dog” to target_embeds. Refer to the code example below for more details. +When you’re using this pipeline from a prompt, specify the source concept in the prompt. Taking +the above example, a valid input prompt would be: “a high resolution painting of a cat in the style of van gough”. +If you wanted to reverse the direction in the example above, i.e., “dog -> cat”, then it’s recommended to:Swap the source_embeds and target_embeds. +Change the input prompt to include “dog”. +To learn more about how the source and target embeddings are generated, refer to the original +paper. Below, we also provide some directions on how to generate the embeddings. +Note that the quality of the outputs generated with this pipeline is dependent on how good the source_embeds and target_embeds are. Please, refer to this discussion for some suggestions on the topic. + +Available Pipelines: + +Pipeline +Tasks +Demo +StableDiffusionPix2PixZeroPipeline +Text-Based Image Editing +🤗 Space + +Usage example + + +Based on an image generated with the input prompt + + + + Copied +import requests +import torch + +from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +def download(embedding_url, local_filepath): + r = requests.get(embedding_url) + with open(local_filepath, "wb") as f: + f.write(r.content) + + +model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + model_ckpt, conditions_input_image=False, torch_dtype=torch.float16 +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.to("cuda") + +prompt = "a high resolution painting of a cat in the style of van gogh" +src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt" +target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt" + +for url in [src_embs_url, target_embs_url]: + download(url, url.split("/")[-1]) + +src_embeds = torch.load(src_embs_url.split("/")[-1]) +target_embeds = torch.load(target_embs_url.split("/")[-1]) + +images = pipeline( + prompt, + source_embeds=src_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images +images[0].save("edited_image_dog.png") + +Based on an input image + +When the pipeline is conditioned on an input image, we first obtain an inverted +noise from it using a DDIMInverseScheduler with the help of a generated caption. Then +the inverted noise is used to start the generation process. 
+First, let’s load our pipeline: + + + Copied +import torch +from transformers import BlipForConditionalGeneration, BlipProcessor +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +captioner_id = "Salesforce/blip-image-captioning-base" +processor = BlipProcessor.from_pretrained(captioner_id) +model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True) + +sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + sd_model_ckpt, + caption_generator=model, + caption_processor=processor, + torch_dtype=torch.float16, + safety_checker=None, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +Then, we load an input image for conditioning and obtain a suitable caption for it: + + + Copied +import requests +from PIL import Image + +img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" +raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +caption = pipeline.generate_caption(raw_image) +Then we employ the generated caption and the input image to get the inverted noise: + + + Copied +generator = torch.manual_seed(0) +inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents +Now, generate the image with edit directions: + + + Copied +# See the "Generating source and target embeddings" section below to +# automate the generation of these captions with a pre-trained model like Flan-T5 as explained below. +source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] +target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +source_embeds = pipeline.get_embeds(source_prompts, batch_size=2) +target_embeds = pipeline.get_embeds(target_prompts, batch_size=2) + + +image = pipeline( + caption, + source_embeds=source_embeds, + target_embeds=target_embeds, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, + generator=generator, + latents=inv_latents, + negative_prompt=caption, +).images[0] +image.save("edited_image.png") + +Generating source and target embeddings + +The authors originally used the GPT-3 API to generate the source and target captions for discovering +edit directions. However, we can also leverage open source and public models for the same purpose. +Below, we provide an end-to-end example with the Flan-T5 model +for generating captions and CLIP for +computing embeddings on the generated captions. +1. Load the generation model: + + + Copied +import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16) +2. Construct a starting prompt: + + + Copied +source_concept = "cat" +target_concept = "dog" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." +Here, we’re interested in the “cat -> dog” direction. +3. 
Generate captions: +We can use a utility like so for this purpose. + + + Copied +def generate_captions(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) +And then we just call it to generate our captions: + + + Copied +source_captions = generate_captions(source_text) +target_captions = generate_captions(target_concept) +We encourage you to play around with the different parameters supported by the +generate() method (documentation) for the generation quality you are looking for. +4. Load the embedding model: +Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model. + + + Copied +from diffusers import StableDiffusionPix2PixZeroPipeline + +pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( + "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 +) +pipeline = pipeline.to("cuda") +tokenizer = pipeline.tokenizer +text_encoder = pipeline.text_encoder +5. Compute embeddings: + + + Copied +import torch + +def embed_captions(sentences, tokenizer, text_encoder, device="cuda"): + with torch.no_grad(): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeddings = embed_captions(source_captions, tokenizer, text_encoder) +target_embeddings = embed_captions(target_captions, tokenizer, text_encoder) +And you’re done! Here is a Colab Notebook that you can use to interact with the entire process. +Now, you can use these embeddings directly while calling the pipeline: + + + Copied +from diffusers import DDIMScheduler + +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + +images = pipeline( + prompt, + source_embeds=source_embeddings, + target_embeds=target_embeddings, + num_inference_steps=50, + cross_attention_guidance_amount=0.15, +).images +images[0].save("edited_image_dog.png") + +StableDiffusionPix2PixZeroPipeline + + +class diffusers.StableDiffusionPix2PixZeroPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddpm.DDPMScheduler, diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] +feature_extractor: CLIPFeatureExtractor +safety_checker: StableDiffusionSafetyChecker +inverse_scheduler: DDIMInverseScheduler +caption_generator: BlipForConditionalGeneration +caption_processor: BlipProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. 
+ + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + +requires_safety_checker (bool) — +Whether the pipeline requires a safety checker. We recommend setting it to True if you’re using the +pipeline publicly. + + + +Pipeline for pixel-levl image editing using Pix2Pix Zero. Based on Stable Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str], NoneType] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image, NoneType] = None +source_embeds: Tensor = None +target_embeds: Tensor = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +cross_attention_guidance_amount: float = 0.1 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +StableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +source_embeds (torch.Tensor) — +Source concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. + + +target_embeds (torch.Tensor) — +Target concept embeddings. Generation of the embeddings as per the original +paper. Used in discovering the edit direction. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. 
Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +StableDiffusionPipelineOutput or tuple + + + +StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch + +>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline + + +>>> def download(embedding_url, local_filepath): +... r = requests.get(embedding_url) +... with open(local_filepath, "wb") as f: +... 
f.write(r.content) + + +>>> model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16) +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.to("cuda") + +>>> prompt = "a high resolution painting of a cat in the style of van gough" +>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt" +>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt" + +>>> for url in [source_emb_url, target_emb_url]: +... download(url, url.split("/")[-1]) + +>>> src_embeds = torch.load(source_emb_url.split("/")[-1]) +>>> target_embeds = torch.load(target_emb_url.split("/")[-1]) +>>> images = pipeline( +... prompt, +... source_embeds=src_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... ).images + +>>> images[0].save("edited_image_dog.png") + +construct_direction + +< +source +> +( +embs_source: Tensor +embs_target: Tensor + +) + + + +Constructs the edit direction to steer the image generation process semantically. + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +generate_caption + +< +source +> +( +images + +) + + + +Generates caption for a given image. + +invert + +< +source +> +( +prompt: typing.Optional[str] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +num_inference_steps: int = 50 +guidance_scale: float = 1 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +cross_attention_guidance_amount: float = 0.1 +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: typing.Optional[int] = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None +lambda_auto_corr: float = 20.0 +lambda_kl: float = 20.0 +num_reg_steps: int = 5 +num_auto_corr_rolls: int = 5 + +) + + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (PIL.Image.Image, optional) — +Image, or tensor representing an image batch which will be used for conditioning. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. 
More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +cross_attention_guidance_amount (float, defaults to 0.1) — +Amount of guidance needed from the reference cross-attention maps. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +lambda_auto_corr (float, optional, defaults to 20.0) — +Lambda parameter to control auto correction + + +lambda_kl (float, optional, defaults to 20.0) — +Lambda parameter to control Kullback–Leibler divergence output + + +num_reg_steps (int, optional, defaults to 5) — +Number of regularization loss steps + + +num_auto_corr_rolls (int, optional, defaults to 5) — +Number of auto correction roll steps + + + +Function used to generate inverted latents given a prompt and image. + +Examples: + + + Copied +>>> import torch +>>> from transformers import BlipForConditionalGeneration, BlipProcessor +>>> from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline + +>>> import requests +>>> from PIL import Image + +>>> captioner_id = "Salesforce/blip-image-captioning-base" +>>> processor = BlipProcessor.from_pretrained(captioner_id) +>>> model = BlipForConditionalGeneration.from_pretrained( +... captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True +... ) + +>>> sd_model_ckpt = "CompVis/stable-diffusion-v1-4" +>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained( +... sd_model_ckpt, +... caption_generator=model, +... caption_processor=processor, +... torch_dtype=torch.float16, +... safety_checker=None, +... 
) + +>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +>>> pipeline.enable_model_cpu_offload() + +>>> img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png" + +>>> raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512)) +>>> # generate caption +>>> caption = pipeline.generate_caption(raw_image) + +>>> # "a photography of a cat with flowers and dai dai daie - daie - daie kasaii" +>>> inv_latents = pipeline.invert(caption, image=raw_image).latents +>>> # we need to generate source and target embeds + +>>> source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"] + +>>> target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"] + +>>> source_embeds = pipeline.get_embeds(source_prompts) +>>> target_embeds = pipeline.get_embeds(target_prompts) +>>> # the latents can then be used to edit a real image + +>>> image = pipeline( +... caption, +... source_embeds=source_embeds, +... target_embeds=target_embeds, +... num_inference_steps=50, +... cross_attention_guidance_amount=0.15, +... generator=generator, +... latents=inv_latents, +... negative_prompt=caption, +... ).images[0] +>>> image.save("edited_image.png") diff --git a/scrapped_outputs/fb596062b2307ff5a3fa37921dc3da8f.txt b/scrapped_outputs/fb596062b2307ff5a3fa37921dc3da8f.txt new file mode 100644 index 0000000000000000000000000000000000000000..1be12d79ba5093a72f6b36cdd1b7acba966736f4 --- /dev/null +++ b/scrapped_outputs/fb596062b2307ff5a3fa37921dc3da8f.txt @@ -0,0 +1,42 @@ +HeunDiscreteScheduler The Heun scheduler (Algorithm 1) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. The scheduler is ported from the k-diffusion library and created by Katherine Crowson. HeunDiscreteScheduler class diffusers.HeunDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' use_karras_sigmas: Optional = False clip_sample: Optional = False clip_sample_range: float = 1.0 timestep_spacing: str = 'linspace' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction` (see section 2.4 of Imagen +Video paper). clip_sample (bool, defaults to True) — +Clip the predicted sample for numerical stability. clip_sample_range (float, defaults to 1.0) — +The maximum magnitude for sample clipping. Valid only when clip_sample=True. 
use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. Scheduler with Heun steps for discrete beta schedules. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_begin_index < source > ( begin_index: int = 0 ) Parameters begin_index (int) — +The begin index for the scheduler. Sets the begin index for the scheduler. This function should be run from pipeline before the inference. set_timesteps < source > ( num_inference_steps: int device: Union = None num_train_timesteps: Optional = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: Union timestep: Union sample: Union return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. 
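+In practice, this scheduler is usually swapped into an existing pipeline rather than stepped manually. A minimal usage sketch (the checkpoint id and prompt are just examples):
+
+
+ Copied
+# Minimal sketch: use HeunDiscreteScheduler with a Stable Diffusion pipeline.
+# The checkpoint id and prompt are only examples.
+import torch
+from diffusers import HeunDiscreteScheduler, StableDiffusionPipeline
+
+pipeline = StableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+).to("cuda")
+
+# Reuse the existing scheduler config so the beta schedule and related settings carry over.
+pipeline.scheduler = HeunDiscreteScheduler.from_config(pipeline.scheduler.config)
+
+# Heun is a second-order method, so most sampling steps make two UNet evaluations.
+image = pipeline("a photo of an astronaut riding a horse on mars", num_inference_steps=25).images[0]
+image.save("heun_sample.png")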
diff --git a/scrapped_outputs/fb741d0c9cf8df3796da56e2c16a3cfb.txt b/scrapped_outputs/fb741d0c9cf8df3796da56e2c16a3cfb.txt new file mode 100644 index 0000000000000000000000000000000000000000..26903c98059769d09923319afe7503b246c3bfc7 --- /dev/null +++ b/scrapped_outputs/fb741d0c9cf8df3796da56e2c16a3cfb.txt @@ -0,0 +1,92 @@ +Score SDE VE + + +Overview + +Score-Based Generative Modeling through Stochastic Differential Equations (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole. +The abstract of the paper is the following: +Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. +The original codebase can be found here. +This pipeline implements the Variance Expanding (VE) variant of the method. + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_score_sde_ve.py +Unconditional Image Generation +- + +ScoreSdeVePipeline + + +class diffusers.ScoreSdeVePipeline + +< +source +> +( +unet: UNet2DModel +scheduler: DiffusionPipeline + +) + + +Parameters + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the — + + +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) — +unet (UNet2DModel): U-Net architecture to denoise the encoded image. scheduler (SchedulerMixin): +The ScoreSdeVeScheduler scheduler to be used in combination with unet to denoise the encoded image. + + + + +__call__ + +< +source +> +( +batch_size: int = 1 +num_inference_steps: int = 2000 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +batch_size (int, optional, defaults to 1) — +The number of images to generate. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/fb7ae9360bb885e708f179b6e3529185.txt b/scrapped_outputs/fb7ae9360bb885e708f179b6e3529185.txt new file mode 100644 index 0000000000000000000000000000000000000000..0216b63015b72cee2b55724c811388c4d1a98e96 --- /dev/null +++ b/scrapped_outputs/fb7ae9360bb885e708f179b6e3529185.txt @@ -0,0 +1,41 @@ +KarrasVeScheduler KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers. KarrasVeScheduler class diffusers.KarrasVeScheduler < source > ( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 ) Parameters sigma_min (float, defaults to 0.02) — +The minimum noise magnitude. sigma_max (float, defaults to 100) — +The maximum noise magnitude. s_noise (float, defaults to 1.007) — +The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, +1.011]. s_churn (float, defaults to 80) — +The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100]. s_min (float, defaults to 0.05) — +The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10]. s_max (float, defaults to 50) — +The end value of the sigma range to add noise. A reasonable range is [0.2, 80]. A stochastic scheduler tailored to variance-expanding models. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. For more details on the parameters, see Appendix E. The grid search values used +to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper. add_noise_to_input < source > ( sample: FloatTensor sigma: float generator: Optional = None ) Parameters sample (torch.FloatTensor) — +The input sample. sigma (float) — generator (torch.Generator, optional) — +A random number generator. Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a +higher noise level sigma_hat = sigma_i + gamma_i*sigma_i. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. 
device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — sigma_prev (float) — sample_hat (torch.FloatTensor) — return_dict (bool, optional, defaults to True) — +Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple. Returns +~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput or tuple + +If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVESchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). step_correct < source > ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO) Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sigma_hat (float) — TODO sigma_prev (float) — TODO sample_hat (torch.FloatTensor) — TODO sample_prev (torch.FloatTensor) — TODO derivative (torch.FloatTensor) — TODO return_dict (bool, optional, defaults to True) — +Whether or not to return a DDPMSchedulerOutput or tuple. Returns +prev_sample (TODO) + +updated sample in the diffusion chain. derivative (TODO): TODO + Corrects the predicted sample based on the model_output of the network. KarrasVeOutput class diffusers.schedulers.deprecated.scheduling_karras_ve.KarrasVeOutput < source > ( prev_sample: FloatTensor derivative: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Derivative of predicted original image sample (x_0). pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/fb8484c9815cc6c388511b435e24ff52.txt b/scrapped_outputs/fb8484c9815cc6c388511b435e24ff52.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad1c3e3ea4d799d1b1a429d0168b7b574878ba2a --- /dev/null +++ b/scrapped_outputs/fb8484c9815cc6c388511b435e24ff52.txt @@ -0,0 +1,327 @@ +Semantic Guidance + +Semantic Guidance for Diffusion Models was proposed in SEGA: Instructing Diffusion using Semantic Dimensions and provides strong semantic control over the image generation. +Small changes to the text prompt usually result in entirely different output images. 
However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, and stay true to the original image composition. +The abstract of the paper is the following: +Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user’s intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA’s effectiveness on a variety of tasks and provide evidence for its versatility and flexibility. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_semantic_stable_diffusion.py +Text-to-Image Generation + +Coming Soon + +Tips + +The Semantic Guidance pipeline can be used with any Stable Diffusion checkpoint. + +Run Semantic Guidance + +The interface of SemanticStableDiffusionPipeline provides several additional parameters to influence the image generation. +Exemplary usage may look like this: + + + Copied +import torch +from diffusers import SemanticStableDiffusionPipeline + +pipe = SemanticStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) +pipe = pipe.to("cuda") + +out = pipe( + prompt="a photo of the face of a woman", + num_images_per_prompt=1, + guidance_scale=7, + editing_prompt=[ + "smiling, smile", # Concepts to apply + "glasses, wearing glasses", + "curls, wavy hair, curly hair", + "beard, full beard, mustache", + ], + reverse_editing_direction=[False, False, False, False], # Direction of guidance i.e. increase all concepts + edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept + edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept + edit_threshold=[ + 0.99, + 0.975, + 0.925, + 0.96, + ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions + edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance + edit_mom_beta=0.6, # Momentum beta + edit_weights=[1, 1, 1, 1, 1], # Weights of the individual concepts against each other +) +For more examples check the Colab notebook. + +StableDiffusionSafePipelineOutput + + +class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Stable Diffusion pipelines. 
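+As a quick illustration of how these output fields are consumed, here is a minimal sketch (the checkpoint id, prompt, and editing prompt are just examples):
+
+
+ Copied
+# Minimal sketch: reading the fields of SemanticStableDiffusionPipelineOutput.
+# The checkpoint id, prompt, and editing prompt are only examples.
+import torch
+from diffusers import SemanticStableDiffusionPipeline
+
+pipe = SemanticStableDiffusionPipeline.from_pretrained(
+    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
+).to("cuda")
+
+out = pipe(prompt="a photo of the face of a woman", editing_prompt=["smiling, smile"])
+
+image = out.images[0]                # denoised PIL images
+flagged = out.nsfw_content_detected  # per-image flags, or None if safety checking was skipped
+
+image.save("sega_example.png")
+print(flagged)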
+ +SemanticStableDiffusionPipeline + + +class diffusers.SemanticStableDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: CLIPTextModel +tokenizer: CLIPTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPImageProcessor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (CLIPTextModel) — +Frozen text-encoder. Stable Diffusion uses the text portion of +CLIP, specifically +the clip-vit-large-patch14 variant. + + +tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (Q16SafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPImageProcessor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation with latent editing. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) +This model builds on the implementation of [‘StableDiffusionPipeline’] + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: int = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +editing_prompt: typing.Union[str, typing.List[str], NoneType] = None +editing_prompt_embeddings: typing.Optional[torch.Tensor] = None +reverse_editing_direction: typing.Union[bool, typing.List[bool], NoneType] = False +edit_guidance_scale: typing.Union[float, typing.List[float], NoneType] = 5 +edit_warmup_steps: typing.Union[int, typing.List[int], NoneType] = 10 +edit_cooldown_steps: typing.Union[int, typing.List[int], NoneType] = None +edit_threshold: typing.Union[float, typing.List[float], NoneType] = 0.9 +edit_momentum_scale: typing.Optional[float] = 0.1 +edit_mom_beta: typing.Optional[float] = 0.4 +edit_weights: typing.Optional[typing.List[float]] = None +sem_guidance: typing.Optional[typing.List[torch.Tensor]] = None + +) +→ +SemanticStableDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. 
+ + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a StableDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +editing_prompt (str or List[str], optional) — +The prompt or prompts to use for Semantic guidance. Semantic guidance is disabled by setting +editing_prompt = None. Guidance direction of prompt should be specified via +reverse_editing_direction. + + +editing_prompt_embeddings (torch.Tensor>, optional) — +Pre-computed embeddings to use for semantic guidance. Guidance direction of embedding should be +specified via reverse_editing_direction. + + +reverse_editing_direction (bool or List[bool], optional, defaults to False) — +Whether the corresponding prompt in editing_prompt should be increased or decreased. + + +edit_guidance_scale (float or List[float], optional, defaults to 5) — +Guidance scale for semantic guidance. If provided as list values should correspond to editing_prompt. +edit_guidance_scale is defined as s_e of equation 6 of SEGA +Paper. + + +edit_warmup_steps (float or List[float], optional, defaults to 10) — +Number of diffusion steps (for each prompt) for which semantic guidance will not be applied. Momentum +will still be calculated for those steps and applied once all warmup periods are over. +edit_warmup_steps is defined as delta (δ) of SEGA Paper. 
edit_cooldown_steps (float or List[float], optional, defaults to None) —
Number of diffusion steps (for each prompt) after which semantic guidance will no longer be applied.


edit_threshold (float or List[float], optional, defaults to 0.9) —
Threshold of semantic guidance.


edit_momentum_scale (float, optional, defaults to 0.1) —
Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller
than edit_warmup_steps. Momentum will only be added to the latent guidance once all warmup periods are
finished. edit_momentum_scale is defined as s_m of equation 7 of the SEGA
Paper.


edit_mom_beta (float, optional, defaults to 0.4) —
Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous
momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller
than edit_warmup_steps. edit_mom_beta is defined as beta_m (β) of equation 8 of the SEGA
Paper.


edit_weights (List[float], optional, defaults to None) —
Indicates how much each individual concept should influence the overall guidance. If no weights are
provided, all concepts are applied equally. edit_weights is defined as g_i of equation 9 of the SEGA
Paper.


sem_guidance (List[torch.Tensor], optional) —
List of pre-generated guidance vectors to be applied at generation. The length of the list has to
correspond to num_inference_steps.


Returns

SemanticStableDiffusionPipelineOutput or tuple



SemanticStableDiffusionPipelineOutput if return_dict is True,
otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.


Function invoked when calling the pipeline for generation.
diff --git a/scrapped_outputs/fb8ea709d4c08756e738192ab408b97a.txt b/scrapped_outputs/fb8ea709d4c08756e738192ab408b97a.txt new file mode 100644 index 0000000000000000000000000000000000000000..78c3d8546c4767fffa594b36c432c1201bb2ccc3 --- /dev/null +++ b/scrapped_outputs/fb8ea709d4c08756e738192ab408b97a.txt @@ -0,0 +1,17 @@
Token merging

Token merging (ToMe) progressively merges redundant tokens/patches in the forward pass of a Transformer-based network, which can speed up the inference latency of StableDiffusionPipeline. Install ToMe from pip:

 Copied
pip install tomesd

You can use ToMe from the tomesd library with the apply_patch function:

 Copied
  from diffusers import StableDiffusionPipeline
  import torch
  import tomesd

  pipeline = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True,
  ).to("cuda")
+ tomesd.apply_patch(pipeline, ratio=0.5)

  image = pipeline("a photo of an astronaut riding a horse on mars").images[0]

The apply_patch function exposes a number of arguments to help strike a balance between pipeline inference speed and the quality of the generated images. The most important argument is ratio, which controls the number of tokens that are merged during the forward pass. As reported in the paper, ToMe can greatly preserve the quality of the generated images while boosting inference speed. By increasing the ratio, you can speed up inference even further, but at the cost of some degraded image quality.
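To get a feel for that trade-off on your own hardware, a small sketch like the one below can be used to time the pipeline at a few ratio values. It reuses the checkpoint from the snippet above and assumes your tomesd version also provides remove_patch to undo a previous patch:

 Copied
import time

import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Higher ratios merge more tokens: faster inference, but potentially lower image quality.
for ratio in (0.3, 0.5, 0.75):
    tomesd.apply_patch(pipeline, ratio=ratio)
    start = time.perf_counter()
    pipeline(prompt, num_inference_steps=50)
    print(f"ratio={ratio}: {time.perf_counter() - start:.2f}s")
    tomesd.remove_patch(pipeline)  # undo the patch before applying the next ratio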
To test the quality of the generated images, we sampled a few prompts from Parti Prompts and performed inference with the StableDiffusionPipeline with the following settings: We didn’t notice any significant decrease in the quality of the generated samples, and you can check out the generated samples in this WandB report. If you’re interested in reproducing this experiment, use this script. Benchmarks We also benchmarked the impact of tomesd on the StableDiffusionPipeline with xFormers enabled across several image resolutions. The results are obtained from A100 and V100 GPUs in the following development environment: Copied - `diffusers` version: 0.15.1 +- Python version: 3.8.16 +- PyTorch version (GPU?): 1.13.1+cu116 (True) +- Huggingface_hub version: 0.13.2 +- Transformers version: 4.27.2 +- Accelerate version: 0.18.0 +- xFormers version: 0.0.16 +- tomesd version: 0.1.2 To reproduce this benchmark, feel free to use this script. The results are reported in seconds, and where applicable we report the speed-up percentage over the vanilla pipeline when using ToMe and ToMe + xFormers. GPU Resolution Batch size Vanilla ToMe ToMe + xFormers A100 512 10 6.88 5.26 (+23.55%) 4.69 (+31.83%) 768 10 OOM 14.71 11 8 OOM 11.56 8.84 4 OOM 5.98 4.66 2 4.99 3.24 (+35.07%) 2.1 (+37.88%) 1 3.29 2.24 (+31.91%) 2.03 (+38.3%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM 12.51 9.09 2 OOM 6.52 4.96 1 6.4 3.61 (+43.59%) 2.81 (+56.09%) V100 512 10 OOM 10.03 9.29 8 OOM 8.05 7.47 4 5.7 4.3 (+24.56%) 3.98 (+30.18%) 2 3.14 2.43 (+22.61%) 2.27 (+27.71%) 1 1.88 1.57 (+16.49%) 1.57 (+16.49%) 768 10 OOM OOM 23.67 8 OOM OOM 18.81 4 OOM 11.81 9.7 2 OOM 6.27 5.2 1 5.43 3.38 (+37.75%) 2.82 (+48.07%) 1024 10 OOM OOM OOM 8 OOM OOM OOM 4 OOM OOM 19.35 2 OOM 13 10.78 1 OOM 6.66 5.54 As seen in the tables above, the speed-up from tomesd becomes more pronounced for larger image resolutions. It is also interesting to note that with tomesd, it is possible to run the pipeline on a higher resolution like 1024x1024. You may be able to speed-up inference even more with torch.compile. diff --git a/scrapped_outputs/fc15c73c02edabdbaea3a3a7750459aa.txt b/scrapped_outputs/fc15c73c02edabdbaea3a3a7750459aa.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fc39ca8ede4764c61fc6d2a0cf47787f.txt b/scrapped_outputs/fc39ca8ede4764c61fc6d2a0cf47787f.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fc4f5f040c1ab6018fd60641ec51e483.txt b/scrapped_outputs/fc4f5f040c1ab6018fd60641ec51e483.txt new file mode 100644 index 0000000000000000000000000000000000000000..78bbe5a9f180ff0b096046b649d06bb4063d6161 --- /dev/null +++ b/scrapped_outputs/fc4f5f040c1ab6018fd60641ec51e483.txt @@ -0,0 +1,137 @@ +DiffEdit Image editing typically requires providing a mask of the area to be edited. DiffEdit automatically generates the mask for you based on a text query, making it easier overall to create a mask without image editing software. 
The DiffEdit algorithm works in three steps: the diffusion model denoises an image conditioned on some query text and reference text which produces different noise estimates for different areas of the image; the difference is used to infer a mask to identify which area of the image needs to be changed to match the query text the input image is encoded into latent space with DDIM the latents are decoded with the diffusion model conditioned on the text query, using the mask as a guide such that pixels outside the mask remain the same as in the input image This guide will show you how to use DiffEdit to edit images without manually creating a mask. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate The StableDiffusionDiffEditPipeline requires an image mask and a set of partially inverted latents. The image mask is generated from the generate_mask() function, and includes two parameters, source_prompt and target_prompt. These parameters determine what to edit in the image. For example, if you want to change a bowl of fruits to a bowl of pears, then: Copied source_prompt = "a bowl of fruits" +target_prompt = "a bowl of pears" The partially inverted latents are generated from the invert() function, and it is generally a good idea to include a prompt or caption describing the image to help guide the inverse latent sampling process. The caption can often be your source_prompt, but feel free to experiment with other text descriptions! Let’s load the pipeline, scheduler, inverse scheduler, and enable some optimizations to reduce memory usage: Copied import torch +from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", + torch_dtype=torch.float16, + safety_checker=None, + use_safetensors=True, +) +pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) +pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() Load the image to edit: Copied from diffusers.utils import load_image, make_image_grid + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +raw_image Use the generate_mask() function to generate the image mask. You’ll need to pass it the source_prompt and target_prompt to specify what to edit in the image: Copied from PIL import Image + +source_prompt = "a bowl of fruits" +target_prompt = "a basket of pears" +mask_image = pipeline.generate_mask( + image=raw_image, + source_prompt=source_prompt, + target_prompt=target_prompt, +) +Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) Next, create the inverted latents and pass it a caption describing the image: Copied inv_latents = pipeline.invert(prompt=source_prompt, image=raw_image).latents Finally, pass the image mask and inverted latents to the pipeline. 
The target_prompt becomes the prompt now, and the source_prompt is used as the negative_prompt: Copied output_image = pipeline( + prompt=target_prompt, + mask_image=mask_image, + image_latents=inv_latents, + negative_prompt=source_prompt, +).images[0] +mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L").resize((768, 768)) +make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) original image edited image Generate source and target embeddings The source and target embeddings can be automatically generated with the Flan-T5 model instead of creating them manually. Load the Flan-T5 model and tokenizer from the 🤗 Transformers library: Copied import torch +from transformers import AutoTokenizer, T5ForConditionalGeneration + +tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large") +model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large", device_map="auto", torch_dtype=torch.float16) Provide some initial text to prompt the model to generate the source and target prompts. Copied source_concept = "bowl" +target_concept = "basket" + +source_text = f"Provide a caption for images containing a {source_concept}. " +"The captions should be in English and should be no longer than 150 characters." + +target_text = f"Provide a caption for images containing a {target_concept}. " +"The captions should be in English and should be no longer than 150 characters." Next, create a utility function to generate the prompts: Copied @torch.no_grad() +def generate_prompts(input_prompt): + input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda") + + outputs = model.generate( + input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10 + ) + return tokenizer.batch_decode(outputs, skip_special_tokens=True) + +source_prompts = generate_prompts(source_text) +target_prompts = generate_prompts(target_text) +print(source_prompts) +print(target_prompts) Check out the generation strategy guide if you’re interested in learning more about strategies for generating different quality text. Load the text encoder model used by the StableDiffusionDiffEditPipeline to encode the text. 
You’ll use the text encoder to compute the text embeddings: Copied import torch +from diffusers import StableDiffusionDiffEditPipeline + +pipeline = StableDiffusionDiffEditPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +pipeline.enable_vae_slicing() + +@torch.no_grad() +def embed_prompts(sentences, tokenizer, text_encoder, device="cuda"): + embeddings = [] + for sent in sentences: + text_inputs = tokenizer( + sent, + padding="max_length", + max_length=tokenizer.model_max_length, + truncation=True, + return_tensors="pt", + ) + text_input_ids = text_inputs.input_ids + prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0] + embeddings.append(prompt_embeds) + return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0) + +source_embeds = embed_prompts(source_prompts, pipeline.tokenizer, pipeline.text_encoder) +target_embeds = embed_prompts(target_prompts, pipeline.tokenizer, pipeline.text_encoder) Finally, pass the embeddings to the generate_mask() and invert() functions, and pipeline to generate the image: Copied from diffusers import DDIMInverseScheduler, DDIMScheduler + from diffusers.utils import load_image, make_image_grid + from PIL import Image + + pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) + pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config) + + img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" + raw_image = load_image(img_url).resize((768, 768)) + + mask_image = pipeline.generate_mask( + image=raw_image, +- source_prompt=source_prompt, +- target_prompt=target_prompt, ++ source_prompt_embeds=source_embeds, ++ target_prompt_embeds=target_embeds, + ) + + inv_latents = pipeline.invert( +- prompt=source_prompt, ++ prompt_embeds=source_embeds, + image=raw_image, + ).latents + + output_image = pipeline( + mask_image=mask_image, + image_latents=inv_latents, +- prompt=target_prompt, +- negative_prompt=source_prompt, ++ prompt_embeds=target_embeds, ++ negative_prompt_embeds=source_embeds, + ).images[0] + mask_image = Image.fromarray((mask_image.squeeze()*255).astype("uint8"), "L") + make_image_grid([raw_image, mask_image, output_image], rows=1, cols=3) Generate a caption for inversion While you can use the source_prompt as a caption to help generate the partially inverted latents, you can also use the BLIP model to automatically generate a caption. 
Load the BLIP model and processor from the 🤗 Transformers library: Copied import torch +from transformers import BlipForConditionalGeneration, BlipProcessor + +processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") +model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16, low_cpu_mem_usage=True) Create a utility function to generate a caption from the input image: Copied @torch.no_grad() +def generate_caption(images, caption_generator, caption_processor): + text = "a photograph of" + + inputs = caption_processor(images, text, return_tensors="pt").to(device="cuda", dtype=caption_generator.dtype) + caption_generator.to("cuda") + outputs = caption_generator.generate(**inputs, max_new_tokens=128) + + # offload caption generator + caption_generator.to("cpu") + + caption = caption_processor.batch_decode(outputs, skip_special_tokens=True)[0] + return caption Load an input image and generate a caption for it using the generate_caption function: Copied from diffusers.utils import load_image + +img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png" +raw_image = load_image(img_url).resize((768, 768)) +caption = generate_caption(raw_image, model, processor) generated caption: "a photograph of a bowl of fruit on a table" Now you can drop the caption into the invert() function to generate the partially inverted latents! diff --git a/scrapped_outputs/fc8c0c614054c44bd378e1287173aecc.txt b/scrapped_outputs/fc8c0c614054c44bd378e1287173aecc.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3d2a1a340ad1efdbcd58232cb5909967c8d6d47 --- /dev/null +++ b/scrapped_outputs/fc8c0c614054c44bd378e1287173aecc.txt @@ -0,0 +1,64 @@ +Configuration Schedulers from SchedulerMixin and models from ModelMixin inherit from ConfigMixin which stores all the parameters that are passed to their respective __init__ methods in a JSON-configuration file. To use private or gated models, log-in with huggingface-cli login. ConfigMixin class diffusers.ConfigMixin < source > ( ) Base class for all configuration classes. All configuration parameters are stored under self.config. Also +provides the from_config() and save_config() methods for loading, downloading, and +saving classes that inherit from ConfigMixin. Class attributes: config_name (str) — A filename under which the config should stored when calling +save_config() (should be overridden by parent class). ignore_for_config (List[str]) — A list of attributes that should not be saved in the config (should be +overridden by subclass). has_compatibles (bool) — Whether the class has compatible classes (should be overridden by subclass). _deprecated_kwargs (List[str]) — Keyword arguments that are deprecated. Note that the init function +should only have a kwargs argument if at least one argument is deprecated (should be overridden by +subclass). load_config < source > ( pretrained_model_name_or_path: Union return_unused_kwargs = False return_commit_hash = False **kwargs ) → dict Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — +Can be either: + +A string, the model id (for example google/ddpm-celebahq-256) of a pretrained model hosted on +the Hub. +A path to a directory (for example ./my_model_directory) containing model weights saved with +save_config(). 
+ cache_dir (Union[str, os.PathLike], optional) — +Path to a directory where a downloaded pretrained model configuration is cached if the standard cache +is not used. force_download (bool, optional, defaults to False) — +Whether or not to force the (re-)download of the model weights and configuration files, overriding the +cached versions if they exist. resume_download (bool, optional, defaults to False) — +Whether or not to resume downloading the model weights and configuration files. If set to False, any +incompletely downloaded files are deleted. proxies (Dict[str, str], optional) — +A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — +Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — +Whether to only load local model weights and configuration files or not. If set to True, the model +won’t be downloaded from the Hub. token (str or bool, optional) — +The token to use as HTTP bearer authorization for remote files. If True, the token generated from +diffusers-cli login (stored in ~/.huggingface) is used. revision (str, optional, defaults to "main") — +The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier +allowed by Git. subfolder (str, optional, defaults to "") — +The subfolder location of a model file within a larger model repository on the Hub or locally. return_unused_kwargs (bool, optional, defaults to `False) — +Whether unused keyword arguments of the config are returned. return_commit_hash (bool, optional, defaults to False) -- Whether the commit_hash` of the loaded configuration are returned. Returns +dict + +A dictionary of all the parameters stored in a JSON configuration file. + Load a model or scheduler configuration. from_config < source > ( config: Union = None return_unused_kwargs = False **kwargs ) → ModelMixin or SchedulerMixin Parameters config (Dict[str, Any]) — +A config dictionary from which the Python class is instantiated. Make sure to only load configuration +files of compatible classes. return_unused_kwargs (bool, optional, defaults to False) — +Whether kwargs that are not consumed by the Python class should be returned or not. kwargs (remaining dictionary of keyword arguments, optional) — +Can be used to update the configuration object (after it is loaded) and initiate the Python class. +**kwargs are passed directly to the underlying scheduler/model’s __init__ method and eventually +overwrite the same named arguments in config. Returns +ModelMixin or SchedulerMixin + +A model or scheduler object instantiated from a config dictionary. + Instantiate a Python class from a config dictionary. Examples: Copied >>> from diffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler + +>>> # Download scheduler from huggingface.co and cache. 
+>>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") + +>>> # Instantiate DDIM scheduler class with same config as DDPM +>>> scheduler = DDIMScheduler.from_config(scheduler.config) + +>>> # Instantiate PNDM scheduler class with same config as DDPM +>>> scheduler = PNDMScheduler.from_config(scheduler.config) save_config < source > ( save_directory: Union push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — +Directory where the configuration JSON file is saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — +Whether or not to push your model to the Hugging Face Hub after saving it. You can specify the +repository you want to push to with repo_id (will default to the name of save_directory in your +namespace). kwargs (Dict[str, Any], optional) — +Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory specified in save_directory so that it can be reloaded using the +from_config() class method. to_json_file < source > ( json_file_path: Union ) Parameters json_file_path (str or os.PathLike) — +Path to the JSON file to save a configuration instance’s parameters. Save the configuration instance’s parameters to a JSON file. to_json_string < source > ( ) → str Returns +str + +String containing all the attributes that make up the configuration instance in JSON format. + Serializes the configuration instance to a JSON string. diff --git a/scrapped_outputs/fc917090be291cba8d10dafee5cab09e.txt b/scrapped_outputs/fc917090be291cba8d10dafee5cab09e.txt new file mode 100644 index 0000000000000000000000000000000000000000..56e59074b7081ba9ef6c56015b3698c1be3d3268 --- /dev/null +++ b/scrapped_outputs/fc917090be291cba8d10dafee5cab09e.txt @@ -0,0 +1,252 @@ +Latent Diffusion + + +Overview + +Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. +The abstract of the paper is the following: +By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. 
Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. +The original codebase can be found here. + +Tips: + + + + + +Available Pipelines: + +Pipeline +Tasks +Colab +pipeline_latent_diffusion.py +Text-to-Image Generation +- +pipeline_latent_diffusion_superresolution.py +Super Resolution +- + +Examples: + + +LDMTextToImagePipeline + + +class diffusers.LDMTextToImagePipeline + +< +source +> +( +vqvae: typing.Union[diffusers.models.vq_model.VQModel, diffusers.models.autoencoder_kl.AutoencoderKL] +bert: PreTrainedModel +tokenizer: PreTrainedTokenizer +unet: typing.Union[diffusers.models.unet_2d.UNet2DModel, diffusers.models.unet_2d_condition.UNet2DConditionModel] +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) Model to encode and decode images to and from latent representations. + + +bert (LDMBertModel) — +Text-encoder model based on BERT architecture. + + +tokenizer (transformers.BertTokenizer) — +Tokenizer of class +BertTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + + +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 1.0 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +**kwargs + +) +→ +ImagePipelineOutput or tuple + +Parameters + +prompt (str or List[str]) — +The prompt or prompts to guide the image generation. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 1.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt at +the, usually at the expense of lower image quality. 
+ + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. + + + +LDMSuperResolutionPipeline + + +class diffusers.LDMSuperResolutionPipeline + +< +source +> +( +vqvae: VQModel +unet: UNet2DModel +scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] + +) + + +Parameters + +vqvae (VQModel) — +Vector-quantized (VQ) VAE Model to encode and decode images to and from latent representations. + + +unet (UNet2DModel) — U-Net architecture to denoise the encoded image. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latens. Can be one of +DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, +EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler. + + + +A pipeline for image super-resolution using Latent +This class inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +image: typing.Union[torch.Tensor, PIL.Image.Image] = None +batch_size: typing.Optional[int] = 1 +num_inference_steps: typing.Optional[int] = 100 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True + +) +→ +ImagePipelineOutput or tuple + +Parameters + +image (torch.Tensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +batch_size (int, optional, defaults to 1) — +Number of images to generate. + + +num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. + + +Returns + +ImagePipelineOutput or tuple + + + +~pipelines.utils.ImagePipelineOutput if return_dict is +True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images. diff --git a/scrapped_outputs/fc9833236d38744cc28e15411120bc29.txt b/scrapped_outputs/fc9833236d38744cc28e15411120bc29.txt new file mode 100644 index 0000000000000000000000000000000000000000..67c8b53cf21b58b36cb7eadc4efa707362746029 --- /dev/null +++ b/scrapped_outputs/fc9833236d38744cc28e15411120bc29.txt @@ -0,0 +1,61 @@ +Stable Diffusion 2 Stable Diffusion 2 is a text-to-image latent diffusion model built upon the work of the original Stable Diffusion, and it was led by Robin Rombach and Katherine Crowson from Stability AI and LAION. The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels. +These models are trained on an aesthetic subset of the LAION-5B dataset created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using LAION’s NSFW filter. For more details about how Stable Diffusion 2 works and how it differs from the original Stable Diffusion, please refer to the official announcement post. The architecture of Stable Diffusion 2 is more or less identical to the original Stable Diffusion model so check out it’s API documentation for how to use Stable Diffusion 2. We recommend using the DPMSolverMultistepScheduler as it gives a reasonable speed/quality trade-off and can be run with as little as 20 steps. Stable Diffusion 2 is available for tasks like text-to-image, inpainting, super-resolution, and depth-to-image: Task Repository text-to-image (512x512) stabilityai/stable-diffusion-2-base text-to-image (768x768) stabilityai/stable-diffusion-2 inpainting stabilityai/stable-diffusion-2-inpainting super-resolution stable-diffusion-x4-upscaler depth-to-image stabilityai/stable-diffusion-2-depth Here are some examples for how to use Stable Diffusion 2 for each task: Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently! If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations! 
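Before the per-task examples, here is a minimal sketch of the scheduler recommendation above: the 512x512 base checkpoint with the DPMSolverMultistepScheduler and only 20 sampling steps. The fuller examples that follow show the same pattern for each task.

 Copied
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
)
# Swap in the recommended scheduler and run with as few as 20 steps
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("High quality photo of an astronaut riding a horse in space", num_inference_steps=20).images[0]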
Text-to-image Copied from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +import torch + +repo_id = "stabilityai/stable-diffusion-2-base" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "High quality photo of an astronaut riding a horse in space" +image = pipe(prompt, num_inference_steps=25).images[0] +image Inpainting Copied import torch +from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler +from diffusers.utils import load_image, make_image_grid + +img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" +mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" + +init_image = load_image(img_url).resize((512, 512)) +mask_image = load_image(mask_url).resize((512, 512)) + +repo_id = "stabilityai/stable-diffusion-2-inpainting" +pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16") + +pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) +pipe = pipe.to("cuda") + +prompt = "Face of a yellow cat, high resolution, sitting on a park bench" +image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) Super-resolution Copied from diffusers import StableDiffusionUpscalePipeline +from diffusers.utils import load_image, make_image_grid +import torch + +# load model and scheduler +model_id = "stabilityai/stable-diffusion-x4-upscaler" +pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") + +# let's download an image +url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" +low_res_img = load_image(url) +low_res_img = low_res_img.resize((128, 128)) +prompt = "a white cat" +upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] +make_image_grid([low_res_img.resize((512, 512)), upscaled_image.resize((512, 512))], rows=1, cols=2) Depth-to-image Copied import torch +from diffusers import StableDiffusionDepth2ImgPipeline +from diffusers.utils import load_image, make_image_grid + +pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-depth", + torch_dtype=torch.float16, +).to("cuda") + + +url = "http://images.cocodataset.org/val2017/000000039769.jpg" +init_image = load_image(url) +prompt = "two tigers" +negative_prompt = "bad, deformed, ugly, bad anotomy" +image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0] +make_image_grid([init_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/fcaa7b5e42947e4d2412bbb3ddc64669.txt b/scrapped_outputs/fcaa7b5e42947e4d2412bbb3ddc64669.txt new file mode 100644 index 0000000000000000000000000000000000000000..b45fe5213bcfa863fc1c686b497f93e27b1008f7 --- /dev/null +++ b/scrapped_outputs/fcaa7b5e42947e4d2412bbb3ddc64669.txt @@ -0,0 +1,630 @@ +Kandinsky 2.2 Kandinsky 2.2 is created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov. 
The description from it’s GitHub page is: Kandinsky 2.2 brings substantial improvements upon its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder - CLIP-ViT-G and the ControlNet support. The switch to CLIP-ViT-G as the image encoder significantly increases the model’s capability to generate more aesthetic pictures and better understand text, thus enhancing the model’s overall performance. The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation. The original codebase can be found at ai-forever/Kandinsky-2. Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting. Make sure to check out the schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines. KandinskyV22PriorPipeline class diffusers.KandinskyV22PriorPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. 
If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> image_emb, negative_image_emb = pipe_prior(prompt).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. +weights — (List[float]): +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. 
negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") +>>> img1 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> out = pipe_prior.interpolate(images_texts, weights) +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=out.image_embeds, +... negative_image_embeds=out.negative_image_embeds, +... height=768, +... width=768, +... num_inference_steps=50, +... ).images[0] +>>> image.save("starry_cat.png") KandinskyV22Pipeline class diffusers.KandinskyV22Pipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. 
height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorPipeline +>>> import torch + +>>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior") +>>> pipe_prior.to("cuda") +>>> prompt = "red cat, 4k photo" +>>> out = pipe_prior(prompt) +>>> image_emb = out.image_embeds +>>> zero_image_emb = out.negative_image_embeds +>>> pipe = KandinskyV22Pipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder") +>>> pipe.to("cuda") +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=50, +... 
).images +>>> image[0].save("cat.png") KandinskyV22CombinedPipeline class diffusers.KandinskyV22CombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference of the prior pipeline. +The function is called with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your prior pipeline class. callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference of the decoder pipeline. +The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors +as specified by callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipe = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" + +image = pipe(prompt=prompt, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
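For reference, a minimal sketch of trading inference speed for lower memory with sequential offloading (reusing the same text-to-image checkpoint as the example above; the prompt and step count are arbitrary) might look like: Copied
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
)
# Submodules stay on the CPU and are moved to the GPU one at a time when their
# forward method runs: lower memory than enable_model_cpu_offload(), slower inference.
pipe.enable_sequential_cpu_offload()

image = pipe(
    "A lion in galaxies, spirals, nebulae, stars, smoke", num_inference_steps=25
).images[0]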
KandinskyV22ControlnetPipeline class diffusers.KandinskyV22ControlnetPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. hint (torch.FloatTensor) — +The controlnet condition. image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). 
callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22PriorEmb2EmbPipeline class diffusers.KandinskyV22PriorEmb2EmbPipeline < source > ( prior: PriorTransformer image_encoder: CLIPVisionModelWithProjection text_encoder: CLIPTextModelWithProjection tokenizer: CLIPTokenizer scheduler: UnCLIPScheduler image_processor: CLIPImageProcessor ) Parameters prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. Pipeline for generating image prior for Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union strength: float = 0.3 negative_prompt: Union = None num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None guidance_scale: float = 4.0 output_type: Optional = 'pt' return_dict: bool = True ) → KandinskyPriorPipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference emb. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. emb (torch.FloatTensor) — +The image embedding. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. output_type (str, optional, defaults to "pt") — +The output format of the generate image. Choose between: "np" (np.array) or "pt" +(torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
Examples: Copied >>> from diffusers import KandinskyV22Pipeline, KandinskyV22PriorEmb2EmbPipeline +>>> from diffusers.utils import load_image +>>> import torch + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> prompt = "red cat, 4k photo" +>>> img = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) +>>> image_emb, negative_image_emb = pipe_prior(prompt, image=img, strength=0.2).to_tuple() + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=negative_image_emb, +... height=768, +... width=768, +... num_inference_steps=100, +... ).images + +>>> image[0].save("cat.png") interpolate < source > ( images_and_prompts: List weights: List num_images_per_prompt: int = 1 num_inference_steps: int = 25 generator: Union = None latents: Optional = None negative_prior_prompt: Optional = None negative_prompt: str = '' guidance_scale: float = 4.0 device = None ) → KandinskyPriorPipelineOutput or tuple Parameters images_and_prompts (List[Union[str, PIL.Image.Image, torch.FloatTensor]]) — +list of prompts and images to guide the image generation. weights (List[float]) — +list of weights for each condition in images_and_prompts num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. num_inference_steps (int, optional, defaults to 25) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. negative_prior_prompt (str, optional) — +The prompt not to guide the prior diffusion process. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). negative_prompt (str or List[str], optional) — +The prompt not to guide the image generation. Ignored when not using guidance (i.e., ignored if +guidance_scale is less than 1). guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. Returns +KandinskyPriorPipelineOutput or tuple + Function invoked when using the prior pipeline for interpolation. Examples: Copied >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22Pipeline +>>> from diffusers.utils import load_image +>>> import PIL + +>>> import torch +>>> from torchvision import transforms + +>>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 +... ) +>>> pipe_prior.to("cuda") + +>>> img1 = load_image( +... 
"https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/cat.png" +... ) + +>>> img2 = load_image( +... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" +... "/kandinsky/starry_night.jpeg" +... ) + +>>> images_texts = ["a cat", img1, img2] +>>> weights = [0.3, 0.3, 0.4] +>>> image_emb, zero_image_emb = pipe_prior.interpolate(images_texts, weights) + +>>> pipe = KandinskyV22Pipeline.from_pretrained( +... "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +... ) +>>> pipe.to("cuda") + +>>> image = pipe( +... image_embeds=image_emb, +... negative_image_embeds=zero_image_emb, +... height=768, +... width=768, +... num_inference_steps=150, +... ).images[0] + +>>> image.save("starry_cat.png") KandinskyV22Img2ImgPipeline class diffusers.KandinskyV22Img2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. 
guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22Img2ImgCombinedPipeline class diffusers.KandinskyV22Img2ImgCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
__call__ < source > ( prompt: Union image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. strength (float, optional, defaults to 0.3) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. 
Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will be generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generated image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that is called every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return an ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForImage2Image +import torch +import requests +from io import BytesIO +from PIL import Image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +# Download and resize the starting image +response = requests.get(url) +original_image = Image.open(BytesIO(response.content)).convert("RGB") +original_image.thumbnail((768, 768)) + +image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=original_image, num_inference_steps=25).images[0] enable_model_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower. KandinskyV22ControlnetImg2ImgPipeline class diffusers.KandinskyV22ControlnetImg2ImgPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for image-to-image generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__ < source > ( image_embeds: Union image: Union negative_image_embeds: Union hint: FloatTensor height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 strength: float = 0.3 num_images_per_prompt: int = 1 generator: Union = None output_type: Optional = 'pil' callback: Optional = None callback_steps: int = 1 return_dict: bool = True ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. Can also accept image latents as image, if passing latents directly, it will not be encoded +again. strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. hint (torch.FloatTensor) — +The controlnet condition. negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). callback (Callable, optional) — +A function that calls every callback_steps steps during inference. The function is called with the +following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function is called. If not specified, the callback is called at +every step. return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. 
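As a rough illustration of how these arguments fit together, a minimal sketch might pair this pipeline with the KandinskyV22PriorEmb2EmbPipeline documented above; the kandinsky-community/kandinsky-2-2-controlnet-depth checkpoint, the prompt, the strength values, and the random tensor standing in for a real depth hint are all assumptions for illustration only: Copied
import torch
from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
from diffusers.utils import load_image

pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
).to("cuda")

img = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/kandinsky/cat.png"
).resize((768, 768))

# Image embeddings from the prior, conditioned on both the prompt and the input image.
image_emb, negative_image_emb = pipe_prior(
    "a cat wearing a colorful hat, 4k photo", image=img, strength=0.85
).to_tuple()

# Stand-in for a real depth map; in practice this would come from a depth estimator,
# normalized to [0, 1] and shaped (batch, 3, height, width).
hint = torch.rand(1, 3, 768, 768, dtype=torch.float16, device="cuda")

image = pipe(
    image_embeds=image_emb,
    negative_image_embeds=negative_image_emb,
    image=img,
    hint=hint,
    height=768,
    width=768,
    strength=0.5,
    num_inference_steps=50,
).images[0]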
Examples: KandinskyV22InpaintPipeline class diffusers.KandinskyV22InpaintPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel ) Parameters scheduler (DDIMScheduler) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. Pipeline for text-guided image inpainting using Kandinsky2.1 This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( image_embeds: Union image: Union mask_image: Union negative_image_embeds: Union height: int = 512 width: int = 512 num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for text prompt, that will be used to condition the image generation. image (PIL.Image.Image) — +Image, or tensor representing an image batch which will be inpainted, i.e. parts of the image will +be masked out with mask_image and repainted according to prompt. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_image_embeds (torch.FloatTensor or List[torch.FloatTensor]) — +The clip image embeddings for negative text prompt, will be used to condition the image generation. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. 
Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: KandinskyV22InpaintCombinedPipeline class diffusers.KandinskyV22InpaintCombinedPipeline < source > ( unet: UNet2DConditionModel scheduler: DDPMScheduler movq: VQModel prior_prior: PriorTransformer prior_image_encoder: CLIPVisionModelWithProjection prior_text_encoder: CLIPTextModelWithProjection prior_tokenizer: CLIPTokenizer prior_scheduler: UnCLIPScheduler prior_image_processor: CLIPImageProcessor ) Parameters scheduler (Union[DDIMScheduler,DDPMScheduler]) — +A scheduler to be used in combination with unet to generate image latents. unet (UNet2DConditionModel) — +Conditional U-Net architecture to denoise the image embedding. movq (VQModel) — +MoVQ Decoder to generate the image from the latents. prior_prior (PriorTransformer) — +The canonincal unCLIP prior to approximate the image embedding from the text embedding. prior_image_encoder (CLIPVisionModelWithProjection) — +Frozen image-encoder. prior_text_encoder (CLIPTextModelWithProjection) — +Frozen text-encoder. prior_tokenizer (CLIPTokenizer) — +Tokenizer of class +CLIPTokenizer. prior_scheduler (UnCLIPScheduler) — +A scheduler to be used in combination with prior to generate image embedding. prior_image_processor (CLIPImageProcessor) — +A image_processor to be used to preprocess image from clip. Combined Pipeline for inpainting generation using Kandinsky This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) __call__ < source > ( prompt: Union image: Union mask_image: Union negative_prompt: Union = None num_inference_steps: int = 100 guidance_scale: float = 4.0 num_images_per_prompt: int = 1 height: int = 512 width: int = 512 prior_guidance_scale: float = 4.0 prior_num_inference_steps: int = 25 generator: Union = None latents: Optional = None output_type: Optional = 'pil' return_dict: bool = True prior_callback_on_step_end: Optional = None prior_callback_on_step_end_tensor_inputs: List = ['latents'] callback_on_step_end: Optional = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → ImagePipelineOutput or tuple Parameters prompt (str or List[str]) — +The prompt or prompts to guide the image generation. image (torch.FloatTensor, PIL.Image.Image, np.ndarray, List[torch.FloatTensor], List[PIL.Image.Image], or List[np.ndarray]) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. 
Can also accept image latents as image, if passing latents directly, it will not be encoded +again. mask_image (np.array) — +Tensor representing an image batch, to mask image. White pixels in the mask will be repainted, while +black pixels will be preserved. If mask_image is a PIL image, it will be converted to a single +channel (luminance) before use. If it’s a tensor, it should contain one color channel (L) instead of 3, +so the expected shape would be (B, H, W, 1). negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored +if guidance_scale is less than 1). num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. height (int, optional, defaults to 512) — +The height in pixels of the generated image. width (int, optional, defaults to 512) — +The width in pixels of the generated image. prior_guidance_scale (float, optional, defaults to 4.0) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. prior_num_inference_steps (int, optional, defaults to 100) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between: "pil" (PIL.Image.Image), "np" +(np.array) or "pt" (torch.Tensor). return_dict (bool, optional, defaults to True) — +Whether or not to return a ImagePipelineOutput instead of a plain tuple. prior_callback_on_step_end (Callable, optional) — +A function that calls at the end of each denoising steps during the inference. The function is called +with the following arguments: prior_callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). prior_callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the prior_callback_on_step_end function. The tensors specified in the +list will be passed as callback_kwargs argument. You will only be able to include variables listed in +the ._callback_tensor_inputs attribute of your pipeline class. 
callback_on_step_end (Callable, optional) — +A function that is called at the end of each denoising step during inference. The function is called +with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by +callback_on_step_end_tensor_inputs. callback_on_step_end_tensor_inputs (List, optional) — +The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list +will be passed as callback_kwargs argument. You will only be able to include variables listed in the +._callback_tensor_inputs attribute of your pipeline class. Returns +ImagePipelineOutput or tuple + Function invoked when calling the pipeline for generation. Examples: Copied from diffusers import AutoPipelineForInpainting +from diffusers.utils import load_image +import torch +import numpy as np + +pipe = AutoPipelineForInpainting.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 +) +pipe.enable_model_cpu_offload() + +prompt = "A fantasy landscape, Cinematic lighting" +negative_prompt = "low quality, bad quality" + +original_image = load_image( + "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" "/kandinsky/cat.png" +) + +mask = np.zeros((768, 768), dtype=np.float32) +# Let's mask out an area above the cat's head +mask[:250, 250:-250] = 1 + +image = pipe(prompt=prompt, image=original_image, mask_image=mask, num_inference_steps=25).images[0] enable_sequential_cpu_offload < source > ( gpu_id = 0 ) Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to the GPU only when their specific submodule has its forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. However, each model needs to be distilled separately for latent consistency distillation. The core idea with LCM-LoRA is to train just a few adapter layers, the adapter being LoRA in this case. +This way, we don’t have to train the full model and keep the number of trainable parameters manageable. The resulting LoRAs can then be applied to any fine-tuned version of the model without distilling them separately. +Additionally, the LoRAs can be applied to image-to-image, ControlNet/T2I-Adapter, inpainting, AnimateDiff etc. +The LCM-LoRA can also be combined with other LoRAs to generate styled images in very few steps (4-8). LCM-LoRAs are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. For more details about LCM-LoRA, refer to the technical report. This guide shows how to perform inference with LCM-LoRAs for text-to-image image-to-image combined with styled LoRAs ControlNet/T2I-Adapter inpainting AnimateDiff Before going through this guide, we’ll take a look at the general workflow for performing inference with LCM-LoRAs. +LCM-LoRAs are similar to other Stable Diffusion LoRAs so they can be used with any DiffusionPipeline that supports LoRAs. Load the task specific pipeline and model. Set the scheduler to LCMScheduler. Load the LCM-LoRA weights for the model. Reduce the guidance_scale between [1.0, 2.0] and set the num_inference_steps between [4, 8]. Perform inference with the pipeline with the usual parameters. Let’s look at how we can perform inference with LCM-LoRAs for different tasks. First, make sure you have peft installed, for better LoRA support. Copied pip install -U peft Text-to-image You’ll use the StableDiffusionXLPipeline with the scheduler: LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow overcoming the slow iterative nature of diffusion models. Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(42) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. You may have noticed that we set guidance_scale=1.0, which disables classifer-free-guidance. This is because the LCM-LoRA is trained with guidance, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. 
You can also use guidance with LCM-LoRA, but due to the nature of training the model is very sensitve to the guidance_scale values, high values can lead to artifacts in the generated images. In our experiments, we found that the best values are in the range of [1.0, 2.0]. Inference with a fine-tuned model As mentioned above, the LCM-LoRA can be applied to any fine-tuned version of the model without having to distill them separately. Let’s look at how we can perform inference with a fine-tuned model. In this example, we’ll use the animagine-xl model, which is a fine-tuned version of the SDXL model for generating anime. Copied from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "Linaqruf/animagine-xl", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "face focus, cute, masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, turtleneck" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=1.0 +).images[0] Image-to-image LCM-LoRA can be applied to image-to-image tasks too. Let’s look at how we can perform image-to-image generation with LCMs. For this example we’ll use the dreamshaper-7 model and the LCM-LoRA for stable-diffusion-v1-5 . Copied import torch +from diffusers import AutoPipelineForImage2Image, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=1, + strength=0.6, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with styled LoRAs LCM-LoRA can be combined with other LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the LCM-LoRA with the papercut LoRA. +To learn more about how to combine LoRAs, refer to this guide. 
Copied import torch +from diffusers import DiffusionPipeline, LCMScheduler + +pipe = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + variant="fp16", + torch_dtype=torch.float16 +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LoRAs +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm") +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +# Combine LoRAs +pipe.set_adapters(["lcm", "papercut"], adapter_weights=[1.0, 0.8]) + +prompt = "papercut, a cute fox" +generator = torch.manual_seed(0) +image = pipe(prompt, num_inference_steps=4, guidance_scale=1, generator=generator).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and LCM-LoRA. ControlNet + +For this example, we'll use the SD-v1-5 model and the LCM-LoRA for SD-v1-5 with canny ControlNet. + + Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, + variant="fp16" +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + controlnet_conditioning_scale=0.8, + cross_attention_kwargs={"scale": 1}, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend you to try different values for `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale` and `cross_attention_kwargs` parameters and choose the best one. T2I-Adapter This example shows how to use the LCM-LoRA with the Canny T2I-Adapter and SDXL. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1024)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl") + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=1.5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) Inpainting LCM-LoRA can be used for inpainting as well. Copied import torch +from diffusers import AutoPipelineForInpainting, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +pipe = AutoPipelineForInpainting.from_pretrained( + "runwayml/stable-diffusion-inpainting", + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5") + +# load base and mask image +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png") +mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint_mask.png") + +# generator = torch.Generator("cuda").manual_seed(92) +prompt = "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k" +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + image=init_image, + mask_image=mask_image, + generator=generator, + num_inference_steps=4, + guidance_scale=4, +).images[0] +make_image_grid([init_image, mask_image, image], rows=1, cols=3) AnimateDiff AnimateDiff allows you to animate images using Stable Diffusion models. To get good results, we need to generate multiple frames (16-24), and doing this with standard SD models can be very slow. +LCM-LoRA can be used to speed up the process significantly, as you just need to do 4-8 steps for each frame. Let’s look at how we can perform animation with LCM-LoRA and AnimateDiff. 
Copied import torch +from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler, LCMScheduler +from diffusers.utils import export_to_gif + +adapter = MotionAdapter.from_pretrained("diffusers/animatediff-motion-adapter-v1-5") +pipe = AnimateDiffPipeline.from_pretrained( + "frankjoshua/toonyou_beta6", + motion_adapter=adapter, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# load LCM-LoRA +pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5", adapter_name="lcm") +pipe.load_lora_weights("guoyww/animatediff-motion-lora-zoom-in", weight_name="diffusion_pytorch_model.safetensors", adapter_name="motion-lora") + +pipe.set_adapters(["lcm", "motion-lora"], adapter_weights=[0.55, 1.2]) + +prompt = "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress" +generator = torch.manual_seed(0) +frames = pipe( + prompt=prompt, + num_inference_steps=5, + guidance_scale=1.25, + cross_attention_kwargs={"scale": 1}, + num_frames=24, + generator=generator +).frames[0] +export_to_gif(frames, "animation.gif") diff --git a/scrapped_outputs/fcf851adfaf42cd9ea8e285290515047.txt b/scrapped_outputs/fcf851adfaf42cd9ea8e285290515047.txt new file mode 100644 index 0000000000000000000000000000000000000000..af8bc21f7006c2432f3cf43cbda561eb3e9ef283 --- /dev/null +++ b/scrapped_outputs/fcf851adfaf42cd9ea8e285290515047.txt @@ -0,0 +1,42 @@ +RePaintScheduler RePaintScheduler is a DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. It is designed to be used with the RePaintPipeline, and it is based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models by Andreas Lugmayr et al. The abstract from the paper is: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions. GitHub Repository: this http URL. The original implementation can be found at andreas128/RePaint. RePaintScheduler class diffusers.RePaintScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: Optional = None clip_sample: bool = True ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. 
beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, squaredcos_cap_v2, or sigmoid. eta (float) — +The weight of noise for added noise in diffusion step. If its value is between 0.0 and 1.0 it corresponds +to the DDIM scheduler, and if its value is between -0.0 and 1.0 it corresponds to the DDPM scheduler. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. clip_sample (bool, defaults to True) — +Clip the predicted sample between -1 and 1 for numerical stability. RePaintScheduler is a scheduler for DDPM inpainting inside a given mask. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int jump_length: int = 10 jump_n_sample: int = 10 device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. If used, +timesteps must be None. jump_length (int, defaults to 10) — +The number of steps taken forward in time before going backward in time for a single jump (“j” in +RePaint paper). Take a look at Figure 9 and 10 in the paper. jump_n_sample (int, defaults to 10) — +The number of times to make a forward time jump for a given chosen time sample. Take a look at Figure 9 +and 10 in the paper. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: Optional = None return_dict: bool = True ) → RePaintSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. original_image (torch.FloatTensor) — +The original image to inpaint on. mask (torch.FloatTensor) — +The mask where a value of 0.0 indicates which part of the original image to inpaint. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a RePaintSchedulerOutput or tuple. Returns +RePaintSchedulerOutput or tuple + +If return_dict is True, RePaintSchedulerOutput is returned, +otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). 
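For orientation, here is a minimal sketch of how this scheduler is usually driven through RePaintPipeline rather than stepped by hand. The checkpoint name and the local image paths are illustrative assumptions, not part of this reference; the jump parameters mirror the defaults documented above.

import torch
from PIL import Image
from diffusers import RePaintPipeline, RePaintScheduler

# Assumption: an unconditional DDPM face model such as "google/ddpm-ema-celebahq-256"
# serves as the generative prior; the scheduler reuses that checkpoint's scheduler config.
scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256", subfolder="scheduler")
pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler).to("cuda")

original_image = Image.open("face_256.png").convert("RGB")   # hypothetical local files
mask_image = Image.open("face_256_mask.png").convert("RGB")  # 0.0 marks the regions to inpaint

generator = torch.manual_seed(0)
output = pipe(
    image=original_image,
    mask_image=mask_image,
    num_inference_steps=250,
    eta=0.0,
    jump_length=10,    # "j" in the RePaint paper, see set_timesteps() above
    jump_n_sample=10,  # number of forward jumps per chosen time sample
    generator=generator,
)
output.images[0].save("inpainted.png")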
RePaintSchedulerOutput class diffusers.schedulers.scheduling_repaint.RePaintSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from +the current timestep. pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/fd226f57a56978696408f533769e899b.txt b/scrapped_outputs/fd226f57a56978696408f533769e899b.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fd28a0c43c6292aa181cd763959803b1.txt b/scrapped_outputs/fd28a0c43c6292aa181cd763959803b1.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3ff45d9b537f73b4891b1294f8d618d1aafc935 --- /dev/null +++ b/scrapped_outputs/fd28a0c43c6292aa181cd763959803b1.txt @@ -0,0 +1,48 @@ +ScoreSdeVeScheduler ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. It was introduced in the Score-Based Generative Modeling through Stochastic Differential Equations paper by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole. The abstract from the paper is: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model. 
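To connect the predictor-corrector idea from the abstract to the scheduler API documented below, here is a rough sketch of the sampling loop that ScoreSdeVePipeline wraps for you. The checkpoint name, subfolders, tensor shape, and final normalization are illustrative assumptions rather than a definitive implementation.

import torch
from diffusers import UNet2DModel, ScoreSdeVeScheduler

# Assumption: "google/ncsnpp-celebahq-256" is an NCSN++ score model paired with this scheduler.
model = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256", subfolder="unet").to("cuda")
scheduler = ScoreSdeVeScheduler.from_pretrained("google/ncsnpp-celebahq-256", subfolder="scheduler")

num_inference_steps = 2000
scheduler.set_timesteps(num_inference_steps, device="cuda")
scheduler.set_sigmas(num_inference_steps)

generator = torch.manual_seed(0)
sample = torch.randn(1, 3, 256, 256, device="cuda") * scheduler.init_noise_sigma

with torch.no_grad():
    for i, t in enumerate(scheduler.timesteps):
        sigma_t = scheduler.sigmas[i] * torch.ones(sample.shape[0], device=sample.device)

        # corrector: a few Langevin-style updates at the current noise level
        for _ in range(scheduler.config.correct_steps):
            score = model(sample, sigma_t).sample
            sample = scheduler.step_correct(score, sample, generator=generator).prev_sample

        # predictor: one discretized reverse-time SDE step
        score = model(sample, sigma_t).sample
        out = scheduler.step_pred(score, t, sample, generator=generator)
        sample, sample_mean = out.prev_sample, out.prev_sample_mean

image = sample_mean.clamp(0, 1)  # the mean of the final step is typically taken as the output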
ScoreSdeVeScheduler class diffusers.ScoreSdeVeScheduler < source > ( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. snr (float, defaults to 0.15) — +A coefficient weighting the step from the model_output sample (from the network) to the random noise. sigma_min (float, defaults to 0.01) — +The initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror +the distribution of the data. sigma_max (float, defaults to 1348.0) — +The maximum value used for the range of continuous timesteps passed into the model. sampling_eps (float, defaults to 1e-5) — +The end value of sampling where timesteps decrease progressively from 1 to epsilon. correct_steps (int, defaults to 1) — +The number of correction steps performed on a produced sample. ScoreSdeVeScheduler is a variance exploding stochastic differential equation (SDE) scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Optional = None ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_sigmas < source > ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sigma_min (float, optional) — +The initial noise scale value (overrides value given during scheduler instantiation). sigma_max (float, optional) — +The final noise scale value (overrides value given during scheduler instantiation). sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). Sets the noise scales used for the diffusion chain (to be run before inference). The sigmas control the weight +of the drift and diffusion components of the sample update. set_timesteps < source > ( num_inference_steps: int sampling_eps: float = None device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. sampling_eps (float, optional) — +The final timestep value (overrides value given during scheduler instantiation). device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the continuous timesteps used for the diffusion chain (to be run before inference). step_correct < source > ( model_output: FloatTensor sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. 
Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Correct the predicted sample based on the model_output of the network. This is often run repeatedly after +making the prediction for the previous timestep. step_pred < source > ( model_output: FloatTensor timestep: int sample: FloatTensor generator: Optional = None return_dict: bool = True ) → SdeVeOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. generator (torch.Generator, optional) — +A random number generator. return_dict (bool, optional, defaults to True) — +Whether or not to return a SdeVeOutput or tuple. Returns +SdeVeOutput or tuple + +If return_dict is True, SdeVeOutput is returned, otherwise a tuple +is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). SdeVeOutput class diffusers.schedulers.scheduling_sde_ve.SdeVeOutput < source > ( prev_sample: FloatTensor prev_sample_mean: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. prev_sample_mean (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Mean averaged prev_sample over previous timesteps. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/fd32ac9c0dbc7f328eafbbd4611e8767.txt b/scrapped_outputs/fd32ac9c0dbc7f328eafbbd4611e8767.txt new file mode 100644 index 0000000000000000000000000000000000000000..487bda9b7ff66f944694ae1672fd7d2dab01cda6 --- /dev/null +++ b/scrapped_outputs/fd32ac9c0dbc7f328eafbbd4611e8767.txt @@ -0,0 +1,625 @@ +AltDiffusion + +AltDiffusion was proposed in AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu +The abstract of the paper is the following: +In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding. +Overview: +Pipeline +Tasks +Colab +Demo +pipeline_alt_diffusion.py +Text-to-Image Generation +- +- +pipeline_alt_diffusion_img2img.py +Image-to-Image Text-Guided Generation +- +- + +Tips + +AltDiffusion is conceptually exaclty the same as Stable Diffusion. 
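To make that tip concrete, a minimal text-to-image call looks exactly like its Stable Diffusion counterpart, just with the AltDiffusion classes and, optionally, a non-English prompt. This is a sketch; the prompt and dtype are only examples.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# AltCLIP's multilingual text encoder also accepts non-English prompts
prompt = "黑暗精灵公主,非常详细,幻想,数字绘画,概念艺术,插图"  # "dark elf princess, highly detailed, fantasy, digital painting, concept art, illustration"
image = pipe(prompt).images[0]
image.save("alt_diffusion.png")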
+Run AltDiffusion +AltDiffusion can be tested very easily with the AltDiffusionPipeline, AltDiffusionImg2ImgPipeline and the "BAAI/AltDiffusion-m9" checkpoint exactly in the same way it is shown in the Conditional Image Generation Guide and the Image-to-Image Generation Guide. +How to load and use different schedulers. +The alt diffusion pipeline uses DDIMScheduler scheduler by default. But diffusers provides many other schedulers that can be used with the alt diffusion pipeline such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, EulerAncestralDiscreteScheduler etc. +To use a different scheduler, you can either change it via the ConfigMixin.from_config() method or pass the scheduler argument to the from_pretrained method of the pipeline. For example, to use the EulerDiscreteScheduler, you can do the following: + + + Copied +>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler + +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config) + +>>> # or +>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler") +>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler) +How to convert all use cases with multiple or single pipeline +If you want to use all possible use cases in a single DiffusionPipeline we recommend using the components functionality to instantiate all components in the most memory-efficient way: + + + Copied +>>> from diffusers import ( +... AltDiffusionPipeline, +... AltDiffusionImg2ImgPipeline, +... ) + +>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9") +>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components) + +>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline + +AltDiffusionPipelineOutput + + +class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput + +< +source +> +( +images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] +nsfw_content_detected: typing.Optional[typing.List[bool]] + +) + + +Parameters + +images (List[PIL.Image.Image] or np.ndarray) — +List of denoised PIL images of length batch_size or numpy array of shape (batch_size, height, width, num_channels). PIL images or numpy array present the denoised images of the diffusion pipeline. + + +nsfw_content_detected (List[bool]) — +List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” +(nsfw) content, or None if safety checking could not be performed. + + + +Output class for Alt Diffusion pipelines. + +__call__ + + +( +*args +**kwargs + +) + + + +Call self as a function. + +AltDiffusionPipeline + + +class diffusers.AltDiffusionPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. 
+ + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-to-image generation using Alt Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) + +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +height: typing.Optional[int] = None +width: typing.Optional[int] = None +num_inference_steps: int = 50 +guidance_scale: float = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: float = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +latents: typing.Optional[torch.FloatTensor] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 +cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The height in pixels of the generated image. + + +width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — +The width in pixels of the generated image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. If not defined, one has to pass negative_prompt_embeds. instead. +Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. 
+ + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator or List[torch.Generator], optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +latents (torch.FloatTensor, optional) — +Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image +generation. Can be used to tweak the same generation with different prompts. If not provided, a latents +tensor will ge generated by sampling using the supplied random generator. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. + + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +cross_attention_kwargs (dict, optional) — +A kwargs dictionary that if specified is passed along to the AttnProcessor as defined under +self.processor in +diffusers.cross_attention. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import torch +>>> from diffusers import AltDiffusionPipeline + +>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16) +>>> pipe = pipe.to("cuda") + +>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap" +>>> prompt = "黑暗精灵公主,非常详细,幻想,非常详细,数字绘画,概念艺术,敏锐的焦点,插图" +>>> image = pipe(prompt).images[0] + +disable_vae_slicing + +< +source +> +( +) + + + +Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to +computing decoding in one step. + +disable_vae_tiling + +< +source +> +( +) + + + +Disable tiled VAE decoding. If enable_vae_tiling was previously invoked, this method will go back to +computing decoding in one step. 
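Since the callback and callback_steps arguments documented above are the least self-explanatory part of the __call__ signature, here is a small sketch of a progress callback; the logging body and prompt are purely illustrative.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16).to("cuda")

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # invoked every `callback_steps` denoising steps with the current latents
    print(f"step={step} timestep={timestep} latents_std={latents.std().item():.3f}")

image = pipe(
    "a photograph of an astronaut riding a horse",
    num_inference_steps=50,
    callback=log_progress,
    callback_steps=10,
).images[0]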
+ +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. + +enable_vae_slicing + +< +source +> +( +) + + + +Enable sliced VAE decoding. +When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several +steps. This is useful to save some memory and allow larger batch sizes. + +enable_vae_tiling + +< +source +> +( +) + + + +Enable tiled VAE decoding. +When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in +several steps. This is useful to save a large amount of memory and to allow the processing of larger images. + +AltDiffusionImg2ImgPipeline + + +class diffusers.AltDiffusionImg2ImgPipeline + +< +source +> +( +vae: AutoencoderKL +text_encoder: RobertaSeriesModelWithTransformation +tokenizer: XLMRobertaTokenizer +unet: UNet2DConditionModel +scheduler: KarrasDiffusionSchedulers +safety_checker: StableDiffusionSafetyChecker +feature_extractor: CLIPFeatureExtractor +requires_safety_checker: bool = True + +) + + +Parameters + +vae (AutoencoderKL) — +Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. + + +text_encoder (RobertaSeriesModelWithTransformation) — +Frozen text-encoder. Alt Diffusion uses the text portion of +CLIP, +specifically the clip-vit-large-patch14 variant. + + +tokenizer (XLMRobertaTokenizer) — +Tokenizer of class +XLMRobertaTokenizer. + + +unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents. + + +scheduler (SchedulerMixin) — +A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of +DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler. + + +safety_checker (StableDiffusionSafetyChecker) — +Classification module that estimates whether generated images could be considered offensive or harmful. +Please, refer to the model card for details. + + +feature_extractor (CLIPFeatureExtractor) — +Model that extracts features from generated images to be used as inputs for the safety_checker. + + + +Pipeline for text-guided image to image generation using Alt Diffusion. +This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the +library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) 
+ +__call__ + +< +source +> +( +prompt: typing.Union[str, typing.List[str]] = None +image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None +strength: float = 0.8 +num_inference_steps: typing.Optional[int] = 50 +guidance_scale: typing.Optional[float] = 7.5 +negative_prompt: typing.Union[str, typing.List[str], NoneType] = None +num_images_per_prompt: typing.Optional[int] = 1 +eta: typing.Optional[float] = 0.0 +generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None +prompt_embeds: typing.Optional[torch.FloatTensor] = None +negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None +output_type: typing.Optional[str] = 'pil' +return_dict: bool = True +callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None +callback_steps: int = 1 + +) +→ +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + +Parameters + +prompt (str or List[str], optional) — +The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds. +instead. + + +image (torch.FloatTensor or PIL.Image.Image) — +Image, or tensor representing an image batch, that will be used as the starting point for the +process. + + +strength (float, optional, defaults to 0.8) — +Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image +will be used as a starting point, adding more noise to it the larger the strength. The number of +denoising steps depends on the amount of noise initially added. When strength is 1, added noise will +be maximum and the denoising process will run for the full number of iterations specified in +num_inference_steps. A value of 1, therefore, essentially ignores image. + + +num_inference_steps (int, optional, defaults to 50) — +The number of denoising steps. More denoising steps usually lead to a higher quality image at the +expense of slower inference. This parameter will be modulated by strength. + + +guidance_scale (float, optional, defaults to 7.5) — +Guidance scale as defined in Classifier-Free Diffusion Guidance. +guidance_scale is defined as w of equation 2. of Imagen +Paper. Guidance scale is enabled by setting guidance_scale > 1. Higher guidance scale encourages to generate images that are closely linked to the text prompt, +usually at the expense of lower image quality. + + +negative_prompt (str or List[str], optional) — +The prompt or prompts not to guide the image generation. If not defined, one has to pass +negative_prompt_embeds. instead. Ignored when not using guidance (i.e., ignored if guidance_scale +is less than 1). + + +num_images_per_prompt (int, optional, defaults to 1) — +The number of images to generate per prompt. + + +eta (float, optional, defaults to 0.0) — +Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to +schedulers.DDIMScheduler, will be ignored for others. + + +generator (torch.Generator, optional) — +One or a list of torch generator(s) +to make generation deterministic. + + +prompt_embeds (torch.FloatTensor, optional) — +Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not +provided, text embeddings will be generated from prompt input argument. + + +negative_prompt_embeds (torch.FloatTensor, optional) — +Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt +weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input +argument. 
+ + +output_type (str, optional, defaults to "pil") — +The output format of the generate image. Choose between +PIL: PIL.Image.Image or np.array. + + +return_dict (bool, optional, defaults to True) — +Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a +plain tuple. + + +callback (Callable, optional) — +A function that will be called every callback_steps steps during inference. The function will be +called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). + + +callback_steps (int, optional, defaults to 1) — +The frequency at which the callback function will be called. If not specified, the callback will be +called at every step. + + +Returns + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple + + + +~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker`. + + +Function invoked when calling the pipeline for generation. + +Examples: + + + Copied +>>> import requests +>>> import torch +>>> from PIL import Image +>>> from io import BytesIO + +>>> from diffusers import AltDiffusionImg2ImgPipeline + +>>> device = "cuda" +>>> model_id_or_path = "BAAI/AltDiffusion-m9" +>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16) +>>> pipe = pipe.to(device) + +>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg" + +>>> response = requests.get(url) +>>> init_image = Image.open(BytesIO(response.content)).convert("RGB") +>>> init_image = init_image.resize((768, 512)) + +>>> # "A fantasy landscape, trending on artstation" +>>> prompt = "幻想风景, artstation" + +>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images +>>> images[0].save("幻想风景.png") + +enable_model_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared +to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward +method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with +enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet. + +enable_sequential_cpu_offload + +< +source +> +( +gpu_id = 0 + +) + + + +Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, +text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a +torch.device('meta') and loaded to GPU only when their specific submodule has its forwardmethod called. Note that offloading happens on a submodule basis. Memory savings are higher than withenable_model_cpu_offload`, but performance is lower. 
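As a rough illustration of the offloading and VAE slicing/tiling hooks described above, the following sketch enables them before running the pipeline. It assumes accelerate is installed and uses the BAAI/AltDiffusion-m9 checkpoint only as an example.

import torch
from diffusers import AltDiffusionPipeline

pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)

# move whole sub-models to the GPU only while they run; do not also call pipe.to("cuda")
pipe.enable_model_cpu_offload()

# decode the latents in slices/tiles to trade a little speed for a lot of memory
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

image = pipe("一只在雪中的红色狐狸", num_inference_steps=50).images[0]  # "a red fox in the snow"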
diff --git a/scrapped_outputs/fd57a8488b2ae6bb332d76c3b97aaf20.txt b/scrapped_outputs/fd57a8488b2ae6bb332d76c3b97aaf20.txt new file mode 100644 index 0000000000000000000000000000000000000000..12f932f27da948cb5ce81edca4bff5444475b84d --- /dev/null +++ b/scrapped_outputs/fd57a8488b2ae6bb332d76c3b97aaf20.txt @@ -0,0 +1,11 @@ +Control image brightness The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the Common Diffusion Noise Schedules and Sample Steps are Flawed paper. The solutions proposed in the paper are currently implemented in the DDIMScheduler which you can use to improve the lighting in your images. 💡 Take a look at the paper linked above for more details about the proposed solutions! One of the solutions is to train a model with v prediction and v loss. Add the following flag to the train_text_to_image.py or train_text_to_image_lora.py scripts to enable v_prediction: Copied --prediction_type="v_prediction" For example, let’s use the ptx0/pseudo-journey-v2 checkpoint which has been finetuned with v_prediction. Next, configure the following parameters in the DDIMScheduler: rescale_betas_zero_snr=True, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) timestep_spacing="trailing", starts sampling from the last timestep Copied from diffusers import DiffusionPipeline, DDIMScheduler + +pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) + +# switch the scheduler in the pipeline to use the DDIMScheduler +pipeline.scheduler = DDIMScheduler.from_config( + pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" +) +pipeline.to("cuda") Finally, in your call to the pipeline, set guidance_rescale to prevent overexposure: Copied prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" +image = pipeline(prompt, guidance_rescale=0.7).images[0] +image diff --git a/scrapped_outputs/fd6e03cd097e3aacd049f67dfb446bb1.txt b/scrapped_outputs/fd6e03cd097e3aacd049f67dfb446bb1.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a5b1caf72ab66f1458f358678fe7da6bdce6c7 --- /dev/null +++ b/scrapped_outputs/fd6e03cd097e3aacd049f67dfb446bb1.txt @@ -0,0 +1 @@ +SDXL Turbo Stable Diffusion XL (SDXL) Turbo was proposed in Adversarial Diffusion Distillation by Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. The abstract from the paper is: We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that efficiently samples large-scale foundational image diffusion models in just 1–4 steps while maintaining high image quality. We use score distillation to leverage large-scale off-the-shelf image diffusion models as a teacher signal in combination with an adversarial loss to ensure high image fidelity even in the low-step regime of one or two sampling steps. Our analyses show that our model clearly outperforms existing few-step methods (GANs,Latent Consistency Models) in a single step and reaches the performance of state-of-the-art diffusion models (SDXL) in only four steps. ADD is the first method to unlock single-step, real-time image synthesis with foundation models. Tips SDXL Turbo uses the exact same architecture as SDXL, which means it also has the same API. Please refer to the SDXL API reference for more details. 
SDXL Turbo should disable guidance scale by setting guidance_scale=0.0 SDXL Turbo should use timestep_spacing='trailing' for the scheduler and use between 1 and 4 steps. SDXL Turbo has been trained to generate images of size 512x512. SDXL Turbo is open-access, but not open-source meaning that one might have to buy a model license in order to use it for commercial applications. Make sure to read the official model card to learn more. To learn how to use SDXL Turbo for various tasks, how to optimize performance, and other usage examples, take a look at the SDXL Turbo guide. Check out the Stability AI Hub organization for the official base and refiner model checkpoints! diff --git a/scrapped_outputs/fdb47176e174fd07181c3f647133282b.txt b/scrapped_outputs/fdb47176e174fd07181c3f647133282b.txt new file mode 100644 index 0000000000000000000000000000000000000000..3daa3c7a8e691d007251b50d136b24a76d843cf8 --- /dev/null +++ b/scrapped_outputs/fdb47176e174fd07181c3f647133282b.txt @@ -0,0 +1,36 @@ +Stable Diffusion XL Turbo SDXL Turbo is an adversarial time-distilled Stable Diffusion XL (SDXL) model capable +of running inference in as little as 1 step. This guide will show you how to use SDXL-Turbo for text-to-image and image-to-image. Before you begin, make sure you have the following libraries installed: Copied # uncomment to install the necessary libraries in Colab +#!pip install -q diffusers transformers accelerate omegaconf Load model checkpoints Model weights may be stored in separate subfolders on the Hub or locally, in which case, you should use the from_pretrained() method: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline = pipeline.to("cuda") You can also use the from_single_file() method to load a model checkpoint stored in a single file format (.ckpt or .safetensors) from the Hub or locally: Copied from diffusers import StableDiffusionXLPipeline +import torch + +pipeline = StableDiffusionXLPipeline.from_single_file( + "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/sd_xl_turbo_1.0_fp16.safetensors", torch_dtype=torch.float16) +pipeline = pipeline.to("cuda") Text-to-image For text-to-image, pass a text prompt. By default, SDXL Turbo generates a 512x512 image, and that resolution gives the best results. You can try setting the height and width parameters to 768x768 or 1024x1024, but you should expect quality degradations when doing so. Make sure to set guidance_scale to 0.0 to disable, as the model was trained without it. A single inference step is enough to generate high quality images. +Increasing the number of steps to 2, 3 or 4 should improve image quality. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline_text2image = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16") +pipeline_text2image = pipeline_text2image.to("cuda") + +prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe." + +image = pipeline_text2image(prompt=prompt, guidance_scale=0.0, num_inference_steps=1).images[0] +image Image-to-image For image-to-image generation, make sure that num_inference_steps * strength is larger or equal to 1. +The image-to-image pipeline will run for int(num_inference_steps * strength) steps, e.g. 0.5 * 2.0 = 1 step in +our example below. 
Copied from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +# use from_pipe to avoid consuming additional memory when loading a checkpoint +pipeline = AutoPipelineForImage2Image.from_pipe(pipeline_text2image).to("cuda") + +init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") +init_image = init_image.resize((512, 512)) + +prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" + +image = pipeline(prompt, image=init_image, strength=0.5, guidance_scale=0.0, num_inference_steps=2).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Speed-up SDXL Turbo even more Compile the UNet if you are using PyTorch version 2 or better. The first inference run will be very slow, but subsequent ones will be much faster. Copied pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) When using the default VAE, keep it in float32 to avoid costly dtype conversions before and after each generation. You only need to do this one before your first generation: Copied pipe.upcast_vae() As an alternative, you can also use a 16-bit VAE created by community member @madebyollin that does not need to be upcasted to float32. diff --git a/scrapped_outputs/fe50ca3681ee6a76134869cf05fc604a.txt b/scrapped_outputs/fe50ca3681ee6a76134869cf05fc604a.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/fe57af5e39da2c9f26033e22b3605a5c.txt b/scrapped_outputs/fe57af5e39da2c9f26033e22b3605a5c.txt new file mode 100644 index 0000000000000000000000000000000000000000..2dde9c6e189ad6d607bc313e3e555570773bb332 --- /dev/null +++ b/scrapped_outputs/fe57af5e39da2c9f26033e22b3605a5c.txt @@ -0,0 +1,19 @@ +Adapt a model to a new task Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained UNet2DConditionModel. Configure UNet2DConditionModel parameters A UNet2DConditionModel by default accepts 4 channels in the input sample. For example, load a pretrained text-to-image model like runwayml/stable-diffusion-v1-5 and take a look at the number of in_channels: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", use_safetensors=True) +pipeline.unet.config["in_channels"] +4 Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like runwayml/stable-diffusion-inpainting: Copied from diffusers import StableDiffusionPipeline + +pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting", use_safetensors=True) +pipeline.unet.config["in_channels"] +9 To adapt your text-to-image model for inpainting, you’ll need to change the number of in_channels from 4 to 9. Initialize a UNet2DConditionModel with the pretrained text-to-image model weights, and change in_channels to 9. Changing the number of in_channels means you need to set ignore_mismatched_sizes=True and low_cpu_mem_usage=False to avoid a size mismatch error because the shape is different now. 
Copied from diffusers import UNet2DConditionModel + +model_id = "runwayml/stable-diffusion-v1-5" +unet = UNet2DConditionModel.from_pretrained( + model_id, + subfolder="unet", + in_channels=9, + low_cpu_mem_usage=False, + ignore_mismatched_sizes=True, + use_safetensors=True, +) The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (conv_in.weight) of the unet are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/scrapped_outputs/fe6da026ee8363a8c611a53fa686e90f.txt b/scrapped_outputs/fe6da026ee8363a8c611a53fa686e90f.txt new file mode 100644 index 0000000000000000000000000000000000000000..25c46b6891734af2caccd73456b27f1ecd1e462b --- /dev/null +++ b/scrapped_outputs/fe6da026ee8363a8c611a53fa686e90f.txt @@ -0,0 +1,64 @@ +PNDMScheduler PNDMScheduler, or pseudo numerical methods for diffusion models, uses more advanced ODE integration techniques like the Runge-Kutta and linear multi-step method. The original implementation can be found at crowsonkb/k-diffusion. PNDMScheduler class diffusers.PNDMScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' timestep_spacing: str = 'leading' steps_offset: int = 0 ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear, scaled_linear, or squaredcos_cap_v2. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. skip_prk_steps (bool, defaults to False) — +Allows the scheduler to skip the Runge-Kutta steps defined in the original paper as being required before +PLMS steps. set_alpha_to_one (bool, defaults to False) — +Each diffusion step uses the alphas product value at that step and at the previous one. For the final step +there is no previous alpha. When this option is True the previous alpha product is fixed to 1, +otherwise it uses the alpha value at step 0. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process) +or v_prediction (see section 2.4 of Imagen Video +paper). timestep_spacing (str, defaults to "leading") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. PNDMScheduler uses pseudo numerical methods for diffusion models such as the Runge-Kutta and linear multi-step +method. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. 
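For context, the most common way to use this scheduler is to swap it into an existing pipeline from that pipeline's scheduler config. This is a sketch; the checkpoint and the skip_prk_steps override mirror the usual Stable Diffusion convention.

import torch
from diffusers import StableDiffusionPipeline, PNDMScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# reuse the pipeline's scheduler config, but step with PNDM (PLMS only, no Runge-Kutta warm-up)
pipe.scheduler = PNDMScheduler.from_config(pipe.scheduler.config, skip_prk_steps=True)

image = pipe("a photo of an astronaut riding a horse", num_inference_steps=50).images[0]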
scale_model_input < source > ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise), and calls step_prk() +or step_plms() depending on the internal variable counter. step_plms < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the linear multistep method. It performs one forward pass multiple times to approximate the solution. step_prk < source > ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (int) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. return_dict (bool) — +Whether or not to return a SchedulerOutput or tuple. Returns +SchedulerOutput or tuple + +If return_dict is True, SchedulerOutput is returned, otherwise a +tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the sample with +the Runge-Kutta method. It performs four forward passes to approximate the solution to the differential +equation. 
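To see where set_timesteps() and step() fit, here is a bare-bones denoising loop with a small, randomly initialized UNet so the snippet runs end to end. It is a sketch only; a real pipeline would use a trained (usually text-conditioned) model and latent-space inputs.

import torch
from diffusers import UNet2DModel, PNDMScheduler

# tiny untrained UNet, just so the loop below executes
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    layers_per_block=1,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
scheduler = PNDMScheduler()

scheduler.set_timesteps(50)
sample = torch.randn(1, 3, 32, 32) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = unet(scheduler.scale_model_input(sample, t), t).sample
    # step() routes to step_prk() for the warm-up steps and to step_plms() afterwards
    sample = scheduler.step(noise_pred, t, sample).prev_sample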
SchedulerOutput class diffusers.schedulers.scheduling_utils.SchedulerOutput < source > ( prev_sample: FloatTensor ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. Base class for the output of a scheduler’s step function. diff --git a/scrapped_outputs/fea561b0d5a89051ad4686e5c368e697.txt b/scrapped_outputs/fea561b0d5a89051ad4686e5c368e697.txt new file mode 100644 index 0000000000000000000000000000000000000000..bfb701b6b92da524e2044f38c56691f6854d8e5e --- /dev/null +++ b/scrapped_outputs/fea561b0d5a89051ad4686e5c368e697.txt @@ -0,0 +1,169 @@ +Latent Consistency Model Latent Consistency Models (LCM) enable quality image generation in typically 2-4 steps making it possible to use diffusion models in almost real-time settings. From the official website: LCMs can be distilled from any pre-trained Stable Diffusion (SD) in only 4,000 training steps (~32 A100 GPU Hours) for generating high quality 768 x 768 resolution images in 2~4 steps or even one step, significantly accelerating text-to-image generation. We employ LCM to distill the Dreamshaper-V7 version of SD in just 4,000 training iterations. For a more technical overview of LCMs, refer to the paper. LCM distilled models are available for stable-diffusion-v1-5, stable-diffusion-xl-base-1.0, and the SSD-1B model. All the checkpoints can be found in this collection. This guide shows how to perform inference with LCMs for text-to-image image-to-image combined with style LoRAs ControlNet/T2I-Adapter Text-to-image You’ll use the StableDiffusionXLPipeline pipeline with the LCMScheduler and then load the LCM-LoRA. Together with the LCM-LoRA and the scheduler, the pipeline enables a fast inference workflow, overcoming the slow iterative nature of diffusion models. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] Notice that we use only 4 steps for generation which is way less than what’s typically used for standard SDXL. Some details to keep in mind: To perform classifier-free guidance, batch size is usually doubled inside the pipeline. LCM, however, applies guidance using guidance embeddings, so the batch size does not have to be doubled in this case. This leads to a faster inference time, with the drawback that negative prompts don’t have any effect on the denoising process. The UNet was trained using the [3., 13.] guidance scale range. So, that is the ideal range for guidance_scale. However, disabling guidance_scale using a value of 1.0 is also effective in most cases. Image-to-image LCMs can be applied to image-to-image tasks too. For this example, we’ll use the LCM_Dreamshaper_v7 model, but the same steps can be applied to other LCM models as well. 
Copied import torch +from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler +from diffusers.utils import make_image_grid, load_image + +unet = UNet2DConditionModel.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + subfolder="unet", + torch_dtype=torch.float16, +) + +pipe = AutoPipelineForImage2Image.from_pretrained( + "Lykon/dreamshaper-7", + unet=unet, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +prompt = "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +generator = torch.manual_seed(0) +image = pipe( + prompt, + image=init_image, + num_inference_steps=4, + guidance_scale=7.5, + strength=0.5, + generator=generator +).images[0] +make_image_grid([init_image, image], rows=1, cols=2) You can get different results based on your prompt and the image you provide. To get the best results, we recommend trying different values for num_inference_steps, strength, and guidance_scale parameters and choose the best one. Combine with style LoRAs LCMs can be used with other styled LoRAs to generate styled-images in very few steps (4-8). In the following example, we’ll use the papercut LoRA. Copied from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler +import torch + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16", +).to("cuda") +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +pipe.load_lora_weights("TheLastBen/Papercut_SDXL", weight_name="papercut.safetensors", adapter_name="papercut") + +prompt = "papercut, a cute fox" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, num_inference_steps=4, generator=generator, guidance_scale=8.0 +).images[0] +image ControlNet/T2I-Adapter Let’s look at how we can perform inference with ControlNet/T2I-Adapter and a LCM. ControlNet For this example, we’ll use the LCM_Dreamshaper_v7 model with canny ControlNet, but the same steps can be applied to other LCM models as well. 
Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +image = load_image( + "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" +).resize((512, 512)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image) + +controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) +pipe = StableDiffusionControlNetPipeline.from_pretrained( + "SimianLuo/LCM_Dreamshaper_v7", + controlnet=controlnet, + torch_dtype=torch.float16, + safety_checker=None, +).to("cuda") + +# set scheduler +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +generator = torch.manual_seed(0) +image = pipe( + "the mona lisa", + image=canny_image, + num_inference_steps=4, + generator=generator, +).images[0] +make_image_grid([canny_image, image], rows=1, cols=2) The inference parameters in this example might not work for all examples, so we recommend trying different values for the `num_inference_steps`, `guidance_scale`, `controlnet_conditioning_scale`, and `cross_attention_kwargs` parameters and choosing the best one. T2I-Adapter This example shows how to use the lcm-sdxl with the Canny T2I-Adapter. Copied import torch +import cv2 +import numpy as np +from PIL import Image + +from diffusers import StableDiffusionXLAdapterPipeline, UNet2DConditionModel, T2IAdapter, LCMScheduler +from diffusers.utils import load_image, make_image_grid + +# Prepare image +# Detect the canny map in low resolution to avoid high-frequency details +image = load_image( + "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_canny.jpg" +).resize((384, 384)) + +image = np.array(image) + +low_threshold = 100 +high_threshold = 200 + +image = cv2.Canny(image, low_threshold, high_threshold) +image = image[:, :, None] +image = np.concatenate([image, image, image], axis=2) +canny_image = Image.fromarray(image).resize((1024, 1216)) + +# load adapter +adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda") + +unet = UNet2DConditionModel.from_pretrained( + "latent-consistency/lcm-sdxl", + torch_dtype=torch.float16, + variant="fp16", +) +pipe = StableDiffusionXLAdapterPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", + unet=unet, + adapter=adapter, + torch_dtype=torch.float16, + variant="fp16", +).to("cuda") + +pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) + +prompt = "Mystical fairy in real, magic, 4k picture, high quality" +negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured" + +generator = torch.manual_seed(0) +image = pipe( + prompt=prompt, + negative_prompt=negative_prompt, + image=canny_image, + num_inference_steps=4, + guidance_scale=5, + adapter_conditioning_scale=0.8, + adapter_conditioning_factor=1, + generator=generator, +).images[0] +grid = make_image_grid([canny_image, image], rows=1, cols=2) diff --git a/scrapped_outputs/fefc1f05915fd840b1806c4fe2edffb9.txt b/scrapped_outputs/fefc1f05915fd840b1806c4fe2edffb9.txt new file mode 100644 index 
0000000000000000000000000000000000000000..44404381265fb59e40a4d0a64a09200029284152 --- /dev/null +++ b/scrapped_outputs/fefc1f05915fd840b1806c4fe2edffb9.txt @@ -0,0 +1,49 @@ +EulerDiscreteScheduler The Euler scheduler (Algorithm 2) is from the Elucidating the Design Space of Diffusion-Based Generative Models paper by Karras et al. This is a fast scheduler which can often generate good outputs in 20-30 steps. The scheduler is based on the original k-diffusion implementation by Katherine Crowson. EulerDiscreteScheduler class diffusers.EulerDiscreteScheduler < source > ( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None prediction_type: str = 'epsilon' interpolation_type: str = 'linear' use_karras_sigmas: Optional = False sigma_min: Optional = None sigma_max: Optional = None timestep_spacing: str = 'linspace' timestep_type: str = 'discrete' steps_offset: int = 0 rescale_betas_zero_snr: bool = False ) Parameters num_train_timesteps (int, defaults to 1000) — +The number of diffusion steps to train the model. beta_start (float, defaults to 0.0001) — +The starting beta value of inference. beta_end (float, defaults to 0.02) — +The final beta value. beta_schedule (str, defaults to "linear") — +The beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from +linear or scaled_linear. trained_betas (np.ndarray, optional) — +Pass an array of betas directly to the constructor to bypass beta_start and beta_end. prediction_type (str, defaults to epsilon, optional) — +Prediction type of the scheduler function; can be epsilon (predicts the noise of the diffusion process), +sample (directly predicts the noisy sample) or v_prediction (see section 2.4 of Imagen +Video paper). interpolation_type (str, defaults to "linear", optional) — +The interpolation type to compute intermediate sigmas for the scheduler denoising steps. Should be one of +"linear" or "log_linear". use_karras_sigmas (bool, optional, defaults to False) — +Whether to use Karras sigmas for step sizes in the noise schedule during the sampling process. If True, +the sigmas are determined according to a sequence of noise levels {σi}. timestep_spacing (str, defaults to "linspace") — +The way the timesteps should be scaled. Refer to Table 2 of the Common Diffusion Noise Schedules and +Sample Steps are Flawed for more information. steps_offset (int, defaults to 0) — +An offset added to the inference steps. You can use a combination of offset=1 and +set_alpha_to_one=False to make the last step use step 0 for the previous alpha product like in Stable +Diffusion. rescale_betas_zero_snr (bool, defaults to False) — +Whether to rescale the betas to have zero terminal SNR. This enables the model to generate very bright and +dark samples instead of limiting it to samples with medium brightness. Loosely related to +--offset_noise. Euler scheduler. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic +methods the library implements for all schedulers such as loading and saving. scale_model_input < source > ( sample: FloatTensor timestep: Union ) → torch.FloatTensor Parameters sample (torch.FloatTensor) — +The input sample. timestep (int, optional) — +The current timestep in the diffusion chain. Returns +torch.FloatTensor + +A scaled input sample. + Ensures interchangeability with schedulers that need to scale the denoising model input depending on the +current timestep.
Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm. set_timesteps < source > ( num_inference_steps: int device: Union = None ) Parameters num_inference_steps (int) — +The number of diffusion steps used when generating samples with a pre-trained model. device (str or torch.device, optional) — +The device to which the timesteps should be moved to. If None, the timesteps are not moved. Sets the discrete timesteps used for the diffusion chain (to be run before inference). step < source > ( model_output: FloatTensor timestep: Union sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: Optional = None return_dict: bool = True ) → EulerDiscreteSchedulerOutput or tuple Parameters model_output (torch.FloatTensor) — +The direct output from learned diffusion model. timestep (float) — +The current discrete timestep in the diffusion chain. sample (torch.FloatTensor) — +A current instance of a sample created by the diffusion process. s_churn (float) — s_tmin (float) — s_tmax (float) — s_noise (float, defaults to 1.0) — +Scaling factor for noise added to the sample. generator (torch.Generator, optional) — +A random number generator. return_dict (bool) — +Whether or not to return a EulerDiscreteSchedulerOutput or +tuple. Returns +EulerDiscreteSchedulerOutput or tuple + +If return_dict is True, EulerDiscreteSchedulerOutput is +returned, otherwise a tuple is returned where the first element is the sample tensor. + Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion +process from the learned model outputs (most often the predicted noise). EulerDiscreteSchedulerOutput class diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteSchedulerOutput < source > ( prev_sample: FloatTensor pred_original_sample: Optional = None ) Parameters prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the +denoising loop. pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — +The predicted denoised sample (x_{0}) based on the model output from the current timestep. +pred_original_sample can be used to preview progress or for guidance. Output class for the scheduler’s step function output. diff --git a/scrapped_outputs/ff107b9778040ca7ec6220dfa5e08f71.txt b/scrapped_outputs/ff107b9778040ca7ec6220dfa5e08f71.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/scrapped_outputs/ff1ffd74003f455d0fbda1264abf7933.txt b/scrapped_outputs/ff1ffd74003f455d0fbda1264abf7933.txt new file mode 100644 index 0000000000000000000000000000000000000000..da7517473881ae8a5f98c9de9071381dc720f891 --- /dev/null +++ b/scrapped_outputs/ff1ffd74003f455d0fbda1264abf7933.txt @@ -0,0 +1 @@ +Diffusers 🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you’re looking for a simple inference solution or want to train your own diffusion model, 🤗 Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. The library has three main components: State-of-the-art diffusion pipelines for inference with just a few lines of code. 
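For instance, a minimal sketch of what such an inference call can look like (the checkpoint name here is only an example, and a GPU is assumed):

import torch
from diffusers import DiffusionPipeline

# Any text-to-image checkpoint on the Hub can be used here; this one is illustrative.
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipeline.to("cuda")
image = pipeline("An astronaut riding a horse on Mars").images[0]
image.save("astronaut.png")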
There are many pipelines in 🤗 Diffusers, check out the table in the pipeline overview for a complete list of available pipelines and the task they solve. Interchangeable noise schedulers for balancing trade-offs between generation speed and quality. Pretrained models that can be used as building blocks, and combined with schedulers, for creating your own end-to-end diffusion systems. Tutorials Learn the fundamental skills you need to start generating outputs, build your own diffusion system, and train a diffusion model. We recommend starting here if you're using 🤗 Diffusers for the first time! How-to guides Practical guides for helping you load pipelines, models, and schedulers. You'll also learn how to use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and different training techniques. Conceptual guides Understand why the library was designed the way it was, and learn more about the ethical guidelines and safety implementations for using the library. Reference Technical descriptions of how 🤗 Diffusers classes and methods work. diff --git a/scrapped_outputs/ff48978a13c31c4afa161c9c7c441ea1.txt b/scrapped_outputs/ff48978a13c31c4afa161c9c7c441ea1.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c71ca96baa76c1c11f96cfbdad30df65a97ee3 --- /dev/null +++ b/scrapped_outputs/ff48978a13c31c4afa161c9c7c441ea1.txt @@ -0,0 +1,112 @@ +How to contribute to Diffusers 🧨 We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don’t be afraid and get involved if you’re up for it! Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our code of conduct and be mindful to respect it during your interactions. We also recommend you become familiar with the ethical guidelines that guide our project and ask you to adhere to the same principles of transparency and responsibility. We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. Overview You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to +the core library. In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. Asking and answering questions on the Diffusers discussion forum or on Discord. Opening new issues on the GitHub Issues tab. Answering issues on the GitHub Issues tab. Fix a simple issue, marked by the “Good first issue” label, see here. Contribute to the documentation. Contribute a Community Pipeline. Contribute to the examples. Fix a more difficult issue, marked by the “Good second issue” label, see here. Add a new pipeline, model, or scheduler, see “New Pipeline/Model” and “New scheduler” issues. For this contribution, please have a look at Design Philosophy. As said before, all contributions are valuable to the community. 
+In the following, we will explain each contribution a bit more in detail. For all contributions 4 - 9, you will need to open a PR. It is explained in detail how to do so in Opening a pull request. 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord Any question or comment related to the Diffusers library can be asked on the discussion forum or on Discord. Such questions and comments include (but are not limited to): Reports of training or inference experiments in an attempt to share knowledge Presentation of personal projects Questions to non-official training examples Project proposals General feedback Paper summaries Asking for help on personal projects that build on top of the Diffusers library General questions Ethical questions regarding diffusion models … Every question that is asked on the forum or on Discord actively encourages the community to publicly +share knowledge and might very well help a beginner in the future who has the same question you’re +having. Please do pose any questions you might have. +In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. Please keep in mind that the more effort you put into asking or answering a question, the higher +the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. +In short, a high quality question or answer is precise, concise, relevant, easy-to-understand, accessible, and well-formated/well-posed. For more information, please have a look through the How to write a good issue section. NOTE about channels: +The forum is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it’s easier to look up questions and answers that we posted some time ago. +In addition, questions and answers posted in the forum can easily be linked to. +In contrast, Discord has a chat-like format that invites fast back-and-forth communication. +While it will most likely take less time for you to get an answer to your question on Discord, your +question won’t be visible anymore over time. Also, it’s much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. 2. Opening new issues on the GitHub issues tab The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of +the problems they encounter. So thank you for reporting an issue. Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. In a nutshell, this means that everything that is not related to the code of the Diffusers library (including the documentation) should not be asked on GitHub, but rather on either the forum or Discord. 
Please consider the following guidelines when opening a new issue: Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). Please never report a new issue on another (related) issue. If another issue is highly related, please +open a new issue nevertheless and link to the related issue. Make sure your issue is written in English. Please use one of the great, free online translation services, such as DeepL to translate from your native language to English if you are not comfortable in English. Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that python -c "import diffusers; print(diffusers.__version__)" is higher or matches the latest Diffusers version. Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. New issues usually include the following. 2.1. Reproducible, minimal bug reports A bug report should always have a reproducible code snippet and be as minimal and concise as possible. +This means in more detail: Narrow the bug down as much as you can, do not just dump your whole code file. Format your code. Do not include any external libraries except for Diffusers depending on them. Always provide all necessary information about your environment; for this, you can run: diffusers-cli env in your shell and copy-paste the displayed information to the issue. Explain the issue. If the reader doesn’t know what the issue is and why it is an issue, she cannot solve it. Always make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the Hub to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. For more information, please have a look through the How to write a good issue section. You can open a bug report here. 2.2. Feature requests A world-class feature request addresses the following points: Motivation first: Is it related to a problem/frustration with the library? If so, please explain +why. Providing a code snippet that demonstrates the problem is best. Is it related to something you would need for a project? We’d love to hear +about it! Is it something you worked on and think could benefit the community? +Awesome! Tell us what problem it solved for you. Write a full paragraph describing the feature; Provide a code snippet that demonstrates its future use; In case this is related to a paper, please attach a link; Attach any additional information (drawings, screenshots, etc.) you think may help. You can open a feature request here. 2.3 Feedback Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look here. If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. 
If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. +If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. You can open an issue about feedback here. 2.4 Technical questions Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide details on +why this part of the code is difficult to understand. You can open an issue about a technical question here. 2.5 Proposal to add a new model, scheduler, or pipeline If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. Link to any of its open-source implementation(s). Link to the model weights if they are available. If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don’t forget +to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. You can open a request for a model/pipeline/scheduler here. 3. Answering issues on the GitHub issues tab Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. +Some tips to give a high-quality answer to an issue: Be as concise and minimal as possible. Stay on topic. An answer to the issue should concern the issue and only the issue. Provide links to code, papers, or other sources that prove or encourage your point. Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great +help to the maintainers if you can answer such issues, encouraging the author of the issue to be +more precise, provide the link to a duplicated issue or redirect them to the forum or Discord. If you have verified that the issued bug report is correct and requires a correction in the source code, +please have a look at the next sections. For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the Opening a pull request section. 4. Fixing a “Good first issue” Good first issues are marked by the Good first issue label. Usually, the issue already +explains how a potential solution should look so that it is easier to fix. +If the issue hasn’t been closed and you would like to try to fix this issue, you can just leave a message “I would like to try this issue.”. There are usually three scenarios: a.) The issue description already proposes a fix. In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. c.) There is already an open PR to fix the issue, but the issue hasn’t been closed yet. 
If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. 5. Contribute to the documentation A good library always has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a highly +valuable contribution. Contributing to the library can have many forms: Correcting spelling or grammatical errors. Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we would be very happy if you take some time to correct it. Correct the shape or dimensions of a docstring input or output tensor. Clarify documentation that is hard to understand or incorrect. Update outdated code examples. Translating the documentation to another language. Anything displayed on the official Diffusers doc page is part of the official documentation and can be corrected, adjusted in the respective documentation source. Please have a look at this page on how to verify changes made to the documentation locally. 6. Contribute a community pipeline Pipelines are usually the first point of contact between the Diffusers library and the user. +Pipelines are examples of how to use Diffusers models and schedulers. +We support two types of pipelines: Official Pipelines Community Pipelines Both official and community pipelines follow the same design and consist of the same type of components. Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code +resides in src/diffusers/pipelines. +In contrast, community pipelines are contributed and maintained purely by the community and are not tested. +They reside in examples/community and while they can be accessed via the PyPI diffusers package, their code is not part of the PyPI distribution. The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all +possible ways diffusion models can be used for inference, but some of them may be of interest to the community. +Officially released diffusion pipelines, +such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures +high quality of maintenance, no backward-breaking code changes, and testing. +More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. To add a community pipeline, one should add a .py file to examples/community and adapt the examples/community/README.md to include an example of the new pipeline. An example can be seen here. Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. 
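To make the shape of such a contribution concrete, here is a minimal, hypothetical skeleton of what a file in examples/community could contain. The class name and the bare denoising loop are purely illustrative (they assume an unconditional UNet2DModel-style unet), not an existing community pipeline:

import torch
from diffusers import DiffusionPipeline

class MyCommunityPipeline(DiffusionPipeline):
    def __init__(self, unet, scheduler):
        super().__init__()
        # Registering components makes save_pretrained()/from_pretrained() work out of the box
        self.register_modules(unet=unet, scheduler=scheduler)

    @torch.no_grad()
    def __call__(self, batch_size: int = 1, num_inference_steps: int = 50):
        # Start from Gaussian noise and iteratively denoise it with the registered scheduler
        sample = torch.randn(
            (batch_size, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
            device=self.device,
        )
        self.scheduler.set_timesteps(num_inference_steps)
        for t in self.scheduler.timesteps:
            noise_pred = self.unet(sample, t).sample
            sample = self.scheduler.step(noise_pred, t, sample).prev_sample
        return sample

The accompanying examples/community/README.md entry would then show how to load it, for example through the custom_pipeline argument of DiffusionPipeline.from_pretrained().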
Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the +core package. 7. Contribute to training examples Diffusers examples are a collection of training scripts that reside in examples. We support two types of training examples: Official training examples Research training examples Research training examples are located in examples/research_projects whereas official training examples include all folders under examples except the research_projects and community folders. +The official training examples are maintained by the Diffusers’ core maintainers whereas the research training examples are maintained by the community. +This is because of the same reasons put forward in 6. Contribute a community pipeline for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. +If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the research_projects folder and maintained by the author. Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the +training examples, it is required to clone the repository: Copied git clone https://github.com/huggingface/diffusers as well as to install all additional dependencies required for training: Copied pip install -r /examples//requirements.txt Therefore when adding an example, the requirements.txt file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example’s training script. See, for example, the DreamBooth requirements.txt file. Training examples of the Diffusers library should adhere to the following philosophy: All the code necessary to run the examples should be found in a single Python file. One should be able to run the example from the command line with python .py --args. Examples should be kept simple and serve as an example on how to use Diffusers for training. The purpose of example scripts is not to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. To contribute an example, it is highly recommended to look at already existing examples such as dreambooth to get an idea of how they should look like. +We strongly advise contributors to make use of the Accelerate library as it’s tightly integrated +with Diffusers. +Once an example script works, please make sure to add a comprehensive README.md that states how to use the example exactly. This README should include: An example command on how to run the example script as shown here. A link to some training results (logs, models, etc.) that show what the user can expect as shown here. If you are adding a non-official/research training example, please don’t forget to add a sentence that you are maintaining this training example which includes your git handle as shown here. If you are contributing to the official training examples, please also make sure to add a test to examples/test_examples.py. This is not necessary for non-official training examples. 8. 
Fixing a “Good second issue” Good second issues are marked by the Good second issue label. Good second issues are +usually more complicated to solve than Good first issues. +The issue description usually gives less guidance on how to fix the issue and requires +a decent understanding of the library by the interested contributor. +If you are interested in tackling a good second issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn’t merged and try to open an improved PR. +Good second issues are usually more difficult to get merged compared to good first issues, so don’t hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. 9. Adding pipelines, models, schedulers Pipelines, models, and schedulers are the most important pieces of the Diffusers library. +They provide easy access to state-of-the-art diffusion technologies and thus allow the community to +build powerful generative AI applications. By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers which can be of immense value for the whole generative AI ecosystem. Diffusers has a couple of open feature requests for all three components - feel free to gloss over them +if you don’t know yet what specific component you would like to add: Model or pipeline Scheduler Before adding any of the three components, it is strongly recommended that you give the Philosophy guide a read to better understand the design of any of the three components. Please be aware that we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy +as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please open a Feedback issue instead so that it can be discussed whether a certain design pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. Please make sure to add links to the original codebase/paper to the PR and ideally also ping the original author directly on the PR so that they can follow the progress and potentially help with questions. If you are unsure or stuck in the PR, don’t hesitate to leave a message to ask for a first review or help. Copied from mechanism A unique and important feature to understand when adding any pipeline, model or scheduler code is the # Copied from mechanism. You’ll see this all over the Diffusers codebase, and the reason we use it is to keep the codebase easy to understand and maintain. Marking code with the # Copied from mechanism forces the marked code to be identical to the code it was copied from. This makes it easy to update and propagate changes across many files whenever you run make fix-copies. For example, in the code example below, StableDiffusionPipelineOutput is the original code and AltDiffusionPipelineOutput uses the # Copied from mechanism to copy it. The only difference is changing the class prefix from Stable to Alt. Copied # Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt +class AltDiffusionPipelineOutput(BaseOutput): + """ + Output class for Alt Diffusion pipelines. 
+ + Args: + images (`List[PIL.Image.Image]` or `np.ndarray`) + List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width, + num_channels)`. + nsfw_content_detected (`List[bool]`) + List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or + `None` if safety checking could not be performed. + """ To learn more, read this section of the ~Don’t~ Repeat Yourself* blog post. How to write a good issue The better your issue is written, the higher the chances that it will be quickly resolved. Make sure that you’ve used the correct template for your issue. You can pick between Bug Report, Feature Request, Feedback about API Design, New model/pipeline/scheduler addition, Forum, or a blank issue. Make sure to pick the correct one when opening a new issue. Be precise: Give your issue a fitting title. Try to formulate your issue description as simple as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write “Error in diffusers”. Reproducibility: No reproducible code snippet == no solution. If you encounter a bug, maintainers have to be able to reproduce it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, i.e. that there are no missing imports or missing links to images, … Your issue should contain an error message and a code snippet that can be copy-pasted without any changes to reproduce the exact same error message. If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. Minimalistic: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the official GitHub formatting docs for more information. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. 
By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. How to write a good PR Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of “also fixing another problem while we’re adding it”. It is much more difficult to review pull requests that solve multiple, unrelated problems at once. If helpful, try to add a code snippet that displays an example of how your addition can be used. The title of your pull request should be a summary of its contribution. If your pull request addresses an issue, please mention the issue number in +the pull request description to make sure they are linked (and people +consulting the issue know you are working on it); To indicate a work in progress please prefix the title with [WIP]. These +are useful to avoid duplicated work, and to differentiate it from PRs ready +to be merged; Try to formulate and format your text as explained in How to write a good issue. Make sure existing tests pass; Add high-coverage tests. No quality testing = no merge. If you are adding new @slow tests, make sure they pass using +RUN_SLOW=1 python -m pytest tests/test_my_new_model.py. +CircleCI does not run the slow tests, but GitHub Actions does every night! All public methods must have informative docstrings that work nicely with markdown. See pipeline_latent_diffusion.py for an example. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted dataset like +hf-internal-testing or huggingface/documentation-images to place these files. +If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images +to this dataset. How to open a PR Before writing code, we strongly advise you to search through the existing PRs or +issues to make sure that nobody is already working on the same thing. If you are +unsure, it is always a good idea to open an issue to get some feedback. You will need basic git proficiency to be able to contribute to +🧨 Diffusers. git is not the easiest tool to use but it has the greatest +manual. Type git --help in a shell and enjoy. If you prefer books, Pro +Git is a very good reference. Follow these steps to start contributing (supported Python versions): Fork the repository by +clicking on the ‘Fork’ button on the repository’s page. This creates a copy of the code +under your GitHub user account. Clone your fork to your local disk, and add the base repository as a remote: Copied $ git clone git@github.com:/diffusers.git +$ cd diffusers +$ git remote add upstream https://github.com/huggingface/diffusers.git Create a new branch to hold your development changes: Copied $ git checkout -b a-descriptive-name-for-my-changes Do not work on the main branch. Set up a development environment by running the following command in a virtual environment: Copied $ pip install -e ".[dev]" If you have already cloned the repo, you might need to git pull to get the most recent changes in the +library. 
Develop the features on your branch. As you work on the features, you should make sure that the test suite +passes. You should run the tests impacted by your changes like this: Copied $ pytest tests/.py Before you run the tests, please make sure you install the dependencies required for testing. You can do so +with this command: Copied $ pip install -e ".[test]" You can also run the full test suite with the following command, but it takes +a beefy machine to produce a result in a decent amount of time now that +Diffusers has grown a lot. Here is the command for it: Copied $ make test 🧨 Diffusers relies on black and isort to format its source code +consistently. After you make changes, apply automatic style corrections and code verifications +that can’t be automated in one go with: Copied $ make style 🧨 Diffusers also uses ruff and a few custom scripts to check for coding mistakes. Quality +control runs in CI, however, you can also run the same checks with: Copied $ make quality Once you’re happy with your changes, add changed files using git add and +make a commit with git commit to record your changes locally: Copied $ git add modified_file.py +$ git commit -m "A descriptive message about your changes." It is a good idea to sync your copy of the code with the original +repository regularly. This way you can quickly account for changes: Copied $ git pull upstream main Push the changes to your account using: Copied $ git push -u origin a-descriptive-name-for-my-changes Once you are satisfied, go to the +webpage of your fork on GitHub. Click on ‘Pull request’ to send your changes +to the project maintainers for review. It’s OK if maintainers ask you for changes. It happens to core contributors +too! So everyone can see the changes in the Pull request, work in your local +branch and push the changes to your fork. They will automatically appear in +the pull request. Tests An extensive test suite is included to test the library behavior and several examples. Library tests can be found in +the tests folder. We like pytest and pytest-xdist because it’s faster. From the root of the +repository, here’s how to run tests with pytest for the library: Copied $ python -m pytest -n auto --dist=loadfile -s -v ./tests/ In fact, that’s how make test is implemented! You can specify a smaller set of tests in order to test only the feature +you’re working on. By default, slow tests are skipped. Set the RUN_SLOW environment variable to +yes to run them. This will download many gigabytes of models — make sure you +have enough disk space and a good Internet connection, or a lot of patience! Copied $ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ unittest is fully supported, here’s how to run tests with it: Copied $ python -m unittest discover -s tests -t . -v +$ python -m unittest discover -s examples -t examples -v Syncing forked main with upstream (HuggingFace) main To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, +when syncing the main branch of a forked repository, please, follow these steps: When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. 
If a PR is absolutely necessary, use the following steps after checking out your branch: Copied $ git checkout -b your-branch-for-syncing +$ git pull --squash --no-commit upstream main +$ git commit -m '' +$ git push --set-upstream origin your-branch-for-syncing Style guide For documentation strings, 🧨 Diffusers follows the Google style. diff --git a/scrapped_outputs/ff528e56cf2db41b765a68e72b7249bc.txt b/scrapped_outputs/ff528e56cf2db41b765a68e72b7249bc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d05e83f211afd073b47b8d298eea79b4b3c9daf7 --- /dev/null +++ b/scrapped_outputs/ff528e56cf2db41b765a68e72b7249bc.txt @@ -0,0 +1,97 @@ +Text-to-image When you think of diffusion models, text-to-image is usually one of the first things that come to mind. Text-to-image generates an image from a text description (for example, “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”) which is also known as a prompt. From a very high level, a diffusion model takes a prompt and some random initial noise, and iteratively removes the noise to construct an image. The denoising process is guided by the prompt, and once the denoising process ends after a predetermined number of time steps, the image representation is decoded into an image. Read the How does Stable Diffusion work? blog post to learn more about how a latent diffusion model works. You can generate images from a prompt in 🤗 Diffusers in two steps: Load a checkpoint into the AutoPipelineForText2Image class, which automatically detects the appropriate pipeline class to use based on the checkpoint: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") Pass a prompt to the pipeline to generate an image: Copied image = pipeline( + "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k" +).images[0] +image Popular models The most common text-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. There are also ControlNet models or adapters that can be used with text-to-image models for more direct control in generating images. The results from each model are slightly different because of their architecture and training process, but no matter which model you choose, their usage is more or less the same. Let’s use the same prompt for each model and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from Stable Diffusion v1-4, and finetuned for 595K steps on 512x512 images from the LAION-Aesthetics V2 dataset. You can use this model like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Stable Diffusion XL SDXL is a much larger version of the previous Stable Diffusion models, and involves a two-stage model process that adds even more details to an image. It also includes some additional micro-conditionings to generate high-quality images centered subjects. Take a look at the more comprehensive SDXL guide to learn more about how to use it. 
In general, you can use SDXL like: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image Kandinsky 2.2 The Kandinsky model is a bit different from the Stable Diffusion models because it also uses an image prior model to create embeddings that are used to better align text and images in the diffusion model. The easiest way to use Kandinsky 2.2 is: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", generator=generator).images[0] +image ControlNet ControlNet models are auxiliary models or adapters that are finetuned on top of text-to-image models, such as Stable Diffusion v1.5. Using ControlNet models in combination with text-to-image models offers diverse options for more explicit control over how to generate an image. With ControlNet, you add an additional conditioning input image to the model. For example, if you provide an image of a human pose (usually represented as multiple keypoints that are connected into a skeleton) as a conditioning input, the model generates an image that follows the pose of the image. Check out the more in-depth ControlNet guide to learn more about other conditioning inputs and how to use them. In this example, let’s condition the ControlNet with a human pose estimation image. Load the ControlNet model pretrained on human pose estimations: Copied from diffusers import ControlNetModel, AutoPipelineForText2Image +from diffusers.utils import load_image +import torch + +controlnet = ControlNetModel.from_pretrained( + "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +pose_image = load_image("https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png") Pass the controlnet to the AutoPipelineForText2Image, and provide the prompt and pose estimation image: Copied pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16" +).to("cuda") +generator = torch.Generator("cuda").manual_seed(31) +image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=pose_image, generator=generator).images[0] +image Stable Diffusion v1.5 Stable Diffusion XL Kandinsky 2.2 ControlNet (pose conditioning) Configure pipeline parameters There are a number of parameters that can be configured in the pipeline that affect how an image is generated. You can change the image’s output size, specify a negative prompt to improve image quality, and more. This section dives deeper into how to use these parameters. Height and width The height and width parameters control the height and width (in pixels) of the generated image. By default, the Stable Diffusion v1.5 model outputs 512x512 images, but you can change this to any size that is a multiple of 8. 
For example, to create a rectangular image: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16" +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", height=768, width=512 +).images[0] +image Other models may have different default image sizes depending on the image sizes in the training dataset. For example, SDXL’s default image size is 1024x1024 and using lower height and width values may result in lower quality images. Make sure you check the model’s API reference first! Guidance scale The guidance_scale parameter affects how much the prompt influences image generation. A lower value gives the model “creativity” to generate images that are more loosely related to the prompt. Higher guidance_scale values push the model to follow the prompt more closely, and if this value is too high, you may observe some artifacts in the generated image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", guidance_scale=3.5 +).images[0] +image guidance_scale = 2.5 guidance_scale = 7.5 guidance_scale = 10.5 Negative prompt Just like how a prompt guides generation, a negative prompt steers the model away from things you don’t want the model to generate. This is commonly used to improve overall image quality by removing poor or bad image features such as “low resolution” or “bad details”. You can also use a negative prompt to remove or modify the content and style of an image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + negative_prompt="ugly, deformed, disfigured, poor details, bad anatomy", +).images[0] +image negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "astronaut" Generator A torch.Generator object enables reproducibility in a pipeline by setting a manual seed. You can use a Generator to generate batches of images and iteratively improve on an image generated from a seed as detailed in the Improve image quality with deterministic generation guide. You can set a seed and Generator as shown below. Creating an image with a Generator should return the same result each time instead of randomly generating a new image. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +generator = torch.Generator(device="cuda").manual_seed(30) +image = pipeline( + "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", + generator=generator, +).images[0] +image Control image generation There are several ways to exert more control over how an image is generated outside of configuring a pipeline’s parameters, such as prompt weighting and ControlNet models. 
Prompt weighting Prompt weighting is a technique for increasing or decreasing the importance of concepts in a prompt to emphasize or minimize certain features in an image. We recommend using the Compel library to help you generate the weighted prompt embeddings. Learn how to create the prompt embeddings in the Prompt weighting guide. This example focuses on how to use the prompt embeddings in the pipeline. Once you’ve created the embeddings, you can pass them to the prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter in the pipeline. Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16 +).to("cuda") +image = pipeline( + prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel +).images[0] ControlNet As you saw in the ControlNet section, these models offer a more flexible and accurate way to generate images by incorporating an additional conditioning image input. Each ControlNet model is pretrained on a particular type of conditioning image to generate new images that resemble it. For example, if you take a ControlNet model pretrained on depth maps, you can give the model a depth map as a conditioning input and it’ll generate an image that preserves the spatial information in it. This is quicker and easier than specifying the depth information in a prompt. You can even combine multiple conditioning inputs with a MultiControlNet! There are many types of conditioning inputs you can use, and 🤗 Diffusers supports ControlNet for Stable Diffusion and SDXL models. Take a look at the more comprehensive ControlNet guide to learn how you can use these models. Optimize Diffusion models are large, and the iterative nature of denoising an image is computationally expensive and intensive. But this doesn’t mean you need access to powerful - or even many - GPUs to use them. There are many optimization techniques for running diffusion models on consumer and free-tier resources. For example, you can load model weights in half-precision to save GPU memory and increase speed, or offload the model to the CPU and move components onto the GPU only when they are needed to save even more memory. PyTorch 2.0 also supports a more memory-efficient attention mechanism called scaled dot product attention, which is enabled automatically when it is available. You can combine this with torch.compile to speed your code up even more: Copied from diffusers import AutoPipelineForText2Image +import torch + +pipeline = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16").to("cuda") +pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) For more tips on how to optimize your code to save memory and speed up inference, read the Memory and speed and Torch 2.0 guides. diff --git a/scrapped_outputs/ff7431d8e7db3ac935948c9e5def308d.txt b/scrapped_outputs/ff7431d8e7db3ac935948c9e5def308d.txt new file mode 100644 index 0000000000000000000000000000000000000000..d4bab27ceb13236796194079a998b0f3de0f952d --- /dev/null +++ b/scrapped_outputs/ff7431d8e7db3ac935948c9e5def308d.txt @@ -0,0 +1,357 @@ +Train a diffusion model + +Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training.
Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own! +This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋. +💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models, like how they work, check out the notebook! +Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training). + + + Copied +!pip install diffusers[training] +We encourage you to share your model with the community, and in order to do that, you’ll need to log in to your Hugging Face account (create one here if you don’t already have one!). You can log in from a notebook and enter your token when prompted: + + + Copied +>>> from huggingface_hub import notebook_login + +>>> notebook_login() +Or log in from the terminal: + + + Copied +huggingface-cli login +Since the model checkpoints are quite large, install Git-LFS to version these large files: + + + Copied +!sudo apt -qq install git-lfs +!git config --global credential.helper store + +Training configuration + +For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them): + + + Copied +>>> from dataclasses import dataclass + + +>>> @dataclass +... class TrainingConfig: +... image_size = 128 # the generated image resolution +... train_batch_size = 16 +... eval_batch_size = 16 # how many images to sample during evaluation +... num_epochs = 50 +... gradient_accumulation_steps = 1 +... learning_rate = 1e-4 +... lr_warmup_steps = 500 +... save_image_epochs = 10 +... save_model_epochs = 30 +... mixed_precision = "fp16" # `no` for float32, `fp16` for automatic mixed precision +... output_dir = "ddpm-butterflies-128" # the model name locally and on the HF Hub + +... push_to_hub = True # whether to upload the saved model to the HF Hub +... hub_private_repo = False +... overwrite_output_dir = True # overwrite the old model when re-running the notebook +... seed = 0 + + +>>> config = TrainingConfig() + +Load the dataset + +You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library: + + + Copied +>>> from datasets import load_dataset + +>>> config.dataset_name = "huggan/smithsonian_butterflies_subset" +>>> dataset = load_dataset(config.dataset_name, split="train") +💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images. +🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize: + + + Copied +>>> import matplotlib.pyplot as plt + +>>> fig, axs = plt.subplots(1, 4, figsize=(16, 4)) +>>> for i, image in enumerate(dataset[:4]["image"]): +... axs[i].imshow(image) +... axs[i].set_axis_off() +>>> fig.show() + +The images are all different sizes though, so you’ll need to preprocess them first: +Resize changes the image size to the one defined in config.image_size.
+RandomHorizontalFlip augments the dataset by randomly mirroring the images. +Normalize is important to rescale the pixel values into a [-1, 1] range, which is what the model expects. + + + Copied +>>> from torchvision import transforms + +>>> preprocess = transforms.Compose( +... [ +... transforms.Resize((config.image_size, config.image_size)), +... transforms.RandomHorizontalFlip(), +... transforms.ToTensor(), +... transforms.Normalize([0.5], [0.5]), +... ] +... ) +Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training: + + + Copied +>>> def transform(examples): +... images = [preprocess(image.convert("RGB")) for image in examples["image"]] +... return {"images": images} + + +>>> dataset.set_transform(transform) +Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training! + + + Copied +>>> import torch + +>>> train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True) + +Create a UNet2DModel + +Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel: + + + Copied +>>> from diffusers import UNet2DModel + +>>> model = UNet2DModel( +... sample_size=config.image_size, # the target image resolution +... in_channels=3, # the number of input channels, 3 for RGB images +... out_channels=3, # the number of output channels +... layers_per_block=2, # how many ResNet layers to use per UNet block +... block_out_channels=(128, 128, 256, 256, 512, 512), # the number of output channels for each UNet block +... down_block_types=( +... "DownBlock2D", # a regular ResNet downsampling block +... "DownBlock2D", +... "DownBlock2D", +... "DownBlock2D", +... "AttnDownBlock2D", # a ResNet downsampling block with spatial self-attention +... "DownBlock2D", +... ), +... up_block_types=( +... "UpBlock2D", # a regular ResNet upsampling block +... "AttnUpBlock2D", # a ResNet upsampling block with spatial self-attention +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... "UpBlock2D", +... ), +... ) +It is often a good idea to quickly check that the sample image shape matches the model output shape: + + + Copied +>>> sample_image = dataset[0]["images"].unsqueeze(0) +>>> print("Input shape:", sample_image.shape) +Input shape: torch.Size([1, 3, 128, 128]) + +>>> print("Output shape:", model(sample_image, timestep=0).sample.shape) +Output shape: torch.Size([1, 3, 128, 128]) +Great! Next, you’ll need a scheduler to add some noise to the image. + +Create a scheduler + +The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates an image from the noise. During training, the scheduler takes a model output - or a sample - from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule.
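Concretely, for the standard DDPM formulation (this equation is not spelled out in the original tutorial, but it matches what the add_noise call below computes), the noisy sample at timestep t is built from the clean image and sampled Gaussian noise as

x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon

where x_0 is the clean image, \epsilon is the sampled noise, and \bar{\alpha}_t is the cumulative product of the per-step noise coefficients defined by the scheduler’s noise schedule.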
+Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before: + + + Copied +>>> import torch +>>> from PIL import Image +>>> from diffusers import DDPMScheduler + +>>> noise_scheduler = DDPMScheduler(num_train_timesteps=1000) +>>> noise = torch.randn(sample_image.shape) +>>> timesteps = torch.LongTensor([50]) +>>> noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps) + +>>> Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0]) + +The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by: + + + Copied +>>> import torch.nn.functional as F + +>>> noise_pred = model(noisy_image, timesteps).sample +>>> loss = F.mse_loss(noise_pred, noise) + +Train the model + +By now, you have most of the pieces to start training the model and all that’s left is putting everything together. +First, you’ll need an optimizer and a learning rate scheduler: + + + Copied +>>> from diffusers.optimization import get_cosine_schedule_with_warmup + +>>> optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate) +>>> lr_scheduler = get_cosine_schedule_with_warmup( +... optimizer=optimizer, +... num_warmup_steps=config.lr_warmup_steps, +... num_training_steps=(len(train_dataloader) * config.num_epochs), +... ) +Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid: + + + Copied +>>> from diffusers import DDPMPipeline +>>> import math +>>> import os + + +>>> def make_grid(images, rows, cols): +... w, h = images[0].size +... grid = Image.new("RGB", size=(cols * w, rows * h)) +... for i, image in enumerate(images): +... grid.paste(image, box=(i % cols * w, i // cols * h)) +... return grid + + +>>> def evaluate(config, epoch, pipeline): +... # Sample some images from random noise (this is the backward diffusion process). +... # The default pipeline output type is `List[PIL.Image]` +... images = pipeline( +... batch_size=config.eval_batch_size, +... generator=torch.manual_seed(config.seed), +... ).images + +... # Make a grid out of the images +... image_grid = make_grid(images, rows=4, cols=4) + +... # Save the images +... test_dir = os.path.join(config.output_dir, "samples") +... os.makedirs(test_dir, exist_ok=True) +... image_grid.save(f"{test_dir}/{epoch:04d}.png") +Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, write a function to get your repository name and information and then push it to the Hub. +💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗 + + + Copied +>>> from accelerate import Accelerator +>>> from huggingface_hub import HfFolder, Repository, whoami +>>> from tqdm.auto import tqdm +>>> from pathlib import Path +>>> import os + + +>>> def get_full_repo_name(model_id: str, organization: str = None, token: str = None): +... if token is None: +... token = HfFolder.get_token() +... if organization is None: +... username = whoami(token)["name"] +... 
return f"{username}/{model_id}" +... else: +... return f"{organization}/{model_id}" + + +>>> def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler): +... # Initialize accelerator and tensorboard logging +... accelerator = Accelerator( +... mixed_precision=config.mixed_precision, +... gradient_accumulation_steps=config.gradient_accumulation_steps, +... log_with="tensorboard", +... logging_dir=os.path.join(config.output_dir, "logs"), +... ) +... if accelerator.is_main_process: +... if config.push_to_hub: +... repo_name = get_full_repo_name(Path(config.output_dir).name) +... repo = Repository(config.output_dir, clone_from=repo_name) +... elif config.output_dir is not None: +... os.makedirs(config.output_dir, exist_ok=True) +... accelerator.init_trackers("train_example") + +... # Prepare everything +... # There is no specific order to remember, you just need to unpack the +... # objects in the same order you gave them to the prepare method. +... model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( +... model, optimizer, train_dataloader, lr_scheduler +... ) + +... global_step = 0 + +... # Now you train the model +... for epoch in range(config.num_epochs): +... progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process) +... progress_bar.set_description(f"Epoch {epoch}") + +... for step, batch in enumerate(train_dataloader): +... clean_images = batch["images"] +... # Sample noise to add to the images +... noise = torch.randn(clean_images.shape).to(clean_images.device) +... bs = clean_images.shape[0] + +... # Sample a random timestep for each image +... timesteps = torch.randint( +... 0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device +... ).long() + +... # Add noise to the clean images according to the noise magnitude at each timestep +... # (this is the forward diffusion process) +... noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) + +... with accelerator.accumulate(model): +... # Predict the noise residual +... noise_pred = model(noisy_images, timesteps, return_dict=False)[0] +... loss = F.mse_loss(noise_pred, noise) +... accelerator.backward(loss) + +... accelerator.clip_grad_norm_(model.parameters(), 1.0) +... optimizer.step() +... lr_scheduler.step() +... optimizer.zero_grad() + +... progress_bar.update(1) +... logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} +... progress_bar.set_postfix(**logs) +... accelerator.log(logs, step=global_step) +... global_step += 1 + +... # After each epoch you optionally sample some demo images with evaluate() and save the model +... if accelerator.is_main_process: +... pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler) + +... if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1: +... evaluate(config, epoch, pipeline) + +... if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1: +... if config.push_to_hub: +... repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=True) +... else: +... pipeline.save_pretrained(config.output_dir) +Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. 
Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training: + + + Copied +>>> from accelerate import notebook_launcher + +>>> args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler) + +>>> notebook_launcher(train_loop, args, num_processes=1) +Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model! + + + Copied +>>> import glob + +>>> sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png")) +>>> Image.open(sample_images[-1]) + + +Next steps + +Unconditional image generation is one example of a task that can be trained. You can explore other tasks and training techniques by visiting the 🧨 Diffusers Training Examples page. Here are some examples of what you can learn: +Textual Inversion, an algorithm that teaches a model a specific visual concept and integrates it into the generated image. +DreamBooth, a technique for generating personalized images of a subject given several input images of the subject. +Guide to finetuning a Stable Diffusion model on your own dataset. +Guide to using LoRA, a memory-efficient technique for finetuning really large models faster. diff --git a/scrapped_outputs/ffeec7be3375b8d4ec3524a29d286ffc.txt b/scrapped_outputs/ffeec7be3375b8d4ec3524a29d286ffc.txt new file mode 100644 index 0000000000000000000000000000000000000000..d23d93327c35d9c8f0901065ebe9c0cc039991a4 --- /dev/null +++ b/scrapped_outputs/ffeec7be3375b8d4ec3524a29d286ffc.txt @@ -0,0 +1,260 @@ +Image-to-image Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image. With 🤗 Diffusers, this is as easy as 1-2-3: Load a checkpoint into the AutoPipelineForImage2Image class; this pipeline automatically handles loading the correct pipeline class based on the checkpoint: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import load_image, make_image_grid + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() You’ll notice throughout the guide, we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention(), to save memory and increase inference speed. If you’re using PyTorch 2.0, then you don’t need to call enable_xformers_memory_efficient_attention() on your pipeline because it’ll already be using PyTorch 2.0’s native scaled-dot product attention. 
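As a minimal sketch (this guard is an illustration rather than part of the original guide; it assumes xFormers is only installed where it is actually needed, and pipeline is the AutoPipelineForImage2Image instance created above), the xFormers call can be made conditional on the installed PyTorch version so the same script works on both older and newer setups:

import torch

# Only fall back to xFormers attention on PyTorch versions before 2.0;
# PyTorch 2.0+ already uses its native scaled-dot product attention.
if int(torch.__version__.split(".")[0]) < 2:
    pipeline.enable_xformers_memory_efficient_attention()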
Load an image to pass to the pipeline: Copied init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png") Pass a prompt and image to the pipeline to generate an image: Copied prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k" +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Popular models The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their architecture differences and training process; you can generally expect SDXL to produce higher quality images than Stable Diffusion v1.5. Let’s take a quick look at how to use each of these models and compare their results. Stable Diffusion v1.5 Stable Diffusion v1.5 is a latent diffusion model initialized from an earlier checkpoint, and further finetuned for 595K steps on 512x512 images. To use this pipeline for image-to-image, you’ll need to prepare an initial image to pass to the pipeline. Then you can pass a prompt and the image to the pipeline to generate a new image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Stable Diffusion XL (SDXL) SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model’s output. Read the SDXL guide for a more detailed walkthrough of how to use this model, and other techniques it uses to produce high quality images. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.5).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Kandinsky 2.2 The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images. The simplest way to use Kandinsky 2.2 is: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) initial image generated image Configure pipeline parameters There are several important parameters you can configure in the pipeline that’ll affect the image generation process and image quality. Let’s take a closer look at what these parameters do and how changing them affects the output. Strength strength is one of the most important parameters to consider and it’ll have a huge impact on your generated image. It determines how much the generated image resembles the initial image. In other words: 📈 a higher strength value gives the model more “creativity” to generate an image that’s different from the initial image; a strength value of 1.0 means the initial image is more or less ignored 📉 a lower strength value means the generated image is more similar to the initial image The strength and num_inference_steps parameters are related because strength determines the number of noise steps to add. For example, if the num_inference_steps is 50 and strength is 0.8, then this means adding 40 (50 * 0.8) steps of noise to the initial image and then denoising for 40 steps to get the newly generated image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, strength=0.8).images[0] +make_image_grid([init_image, image], rows=1, cols=2) strength = 0.4 strength = 0.6 strength = 1.0 Guidance scale The guidance_scale parameter is used to control how closely aligned the generated image and text prompt are. A higher guidance_scale value means your generated image is more aligned with the prompt, while a lower guidance_scale value means your generated image has more space to deviate from the prompt. You can combine guidance_scale with strength for even more precise control over how expressive the model is. For example, combine a high strength + guidance_scale for maximum creativity or use a combination of low strength and low guidance_scale to generate an image that resembles the initial image but is not as strictly bound to the prompt. Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0] +make_image_grid([init_image, image], rows=1, cols=2) guidance_scale = 0.1 guidance_scale = 5.0 guidance_scale = 10.0 Negative prompt A negative prompt conditions the model to not include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like “poor details” or “blurry” to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from an image. 
Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +# pass prompt and image to pipeline +image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" negative_prompt = "jungle" Chained image-to-image pipelines There are some other interesting ways you can use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines. Text-to-image-to-image Chaining a text-to-image and image-to-image pipeline allows you to generate an image from text and use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let’s chain a Stable Diffusion and a Kandinsky model. Start by generating an image with the text-to-image pipeline: Copied from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image +import torch +from diffusers.utils import make_image_grid + +pipeline = AutoPipelineForText2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0] +text2image Now you can pass this generated image to the image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] +make_image_grid([text2image, image2image], rows=1, cols=2) Image-to-image-to-image You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image. 
Start by generating an image: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE. Pass the latent output from this pipeline to the next pipeline to generate an image in a comic book art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "ogkalu/Comic-Diffusion", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "charliebo artstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "kohbanye/pixel-art-style", torch_dtype=torch.float16 +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# need to include the token "pixelartstyle" in the prompt to use this checkpoint +image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] +make_image_grid([init_image, image], rows=1, cols=2) Image-to-upscaler-to-super-resolution Another way you can chain your image-to-image pipeline is with an upscaler and super-resolution pipeline to really increase the level of details in an image. Start with an image-to-image pipeline: Copied import torch +from diffusers import AutoPipelineForImage2Image +from diffusers.utils import make_image_grid, load_image + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) + +prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" + +# pass prompt and image to pipeline +image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0] It is important to specify output_type="latent" in the pipeline to keep all the outputs in latent space to avoid an unnecessary decode-encode step. 
This only works if the chained pipelines are using the same VAE. Chain it to an upscaler pipeline to increase the image resolution: Copied from diffusers import StableDiffusionLatentUpscalePipeline + +upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained( + "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +upscaler.enable_model_cpu_offload() +upscaler.enable_xformers_memory_efficient_attention() + +image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0] Finally, chain it to a super-resolution pipeline to further enhance the resolution: Copied from diffusers import StableDiffusionUpscalePipeline + +super_res = StableDiffusionUpscalePipeline.from_pretrained( + "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +super_res.enable_model_cpu_offload() +super_res.enable_xformers_memory_efficient_attention() + +image_3 = super_res(prompt, image=image_2).images[0] +make_image_grid([init_image, image_3.resize((512, 512))], rows=1, cols=2) Control image generation Trying to generate an image that looks exactly the way you want can be difficult, which is why controlled generation techniques and models are so useful. While you can use the negative_prompt to partially control image generation, there are more robust methods like prompt weighting and ControlNets. Prompt weighting Prompt weighting allows you to scale the representation of each concept in a prompt. For example, in a prompt like “Astronaut in a jungle, cold color palette, muted colors, detailed, 8k”, you can choose to increase or decrease the embeddings of “astronaut” and “jungle”. The Compel library provides a simple syntax for adjusting prompt weights and generating the embeddings. You can learn how to create the embeddings in the Prompt weighting guide. AutoPipelineForImage2Image has a prompt_embeds (and negative_prompt_embeds if you’re using a negative prompt) parameter where you can pass the embeddings which replaces the prompt parameter. Copied from diffusers import AutoPipelineForImage2Image +import torch + +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +image = pipeline(prompt_embeds=prompt_embeds, # generated from Compel + negative_prompt_embeds=negative_prompt_embeds, # generated from Compel + image=init_image, +).images[0] ControlNet ControlNets provide a more flexible and accurate way to control image generation because you can use an additional conditioning image. The conditioning image can be a canny image, depth map, image segmentation, and even scribbles! Whatever type of conditioning image you choose, the ControlNet generates an image that preserves the information in it. For example, let’s condition an image with a depth map to keep the spatial information in the image. 
Copied from diffusers.utils import load_image, make_image_grid + +# prepare image +url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png" +init_image = load_image(url) +init_image = init_image.resize((958, 960)) # resize to depth image dimensions +depth_image = load_image("https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png") +make_image_grid([init_image, depth_image], rows=1, cols=2) Load a ControlNet model conditioned on depth maps and the AutoPipelineForImage2Image: Copied from diffusers import ControlNetModel, AutoPipelineForImage2Image +import torch + +controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16, variant="fp16", use_safetensors=True) +pipeline = AutoPipelineForImage2Image.from_pretrained( + "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16, variant="fp16", use_safetensors=True +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() Now generate a new image conditioned on the depth map, initial image, and prompt: Copied prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k" +image_control_net = pipeline(prompt, image=init_image, control_image=depth_image).images[0] +make_image_grid([init_image, depth_image, image_control_net], rows=1, cols=3) initial image depth image ControlNet image Let’s apply a new style to the image generated from the ControlNet by chaining it with an image-to-image pipeline: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( + "nitrosocke/elden-ring-diffusion", torch_dtype=torch.float16, +) +pipeline.enable_model_cpu_offload() +# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed +pipeline.enable_xformers_memory_efficient_attention() + +prompt = "elden ring style astronaut in a jungle" # include the token "elden ring style" in the prompt +negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy" + +image_elden_ring = pipeline(prompt, negative_prompt=negative_prompt, image=image_control_net, strength=0.45, guidance_scale=10.5).images[0] +make_image_grid([init_image, depth_image, image_control_net, image_elden_ring], rows=2, cols=2) Optimize Running diffusion models is computationally expensive and intensive, but with a few optimization tricks, it is entirely possible to run them on consumer and free-tier GPUs. For example, you can use a more memory-efficient form of attention such as PyTorch 2.0’s scaled-dot product attention or xFormers (you can use one or the other, but there’s no need to use both). You can also offload the model to the GPU while the other pipeline components wait on the CPU. Copied + pipeline.enable_model_cpu_offload() ++ pipeline.enable_xformers_memory_efficient_attention() With torch.compile, you can boost your inference speed even more by wrapping your UNet with it: Copied pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True) To learn more, take a look at the Reduce memory usage and Torch 2.0 guides.
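To tie the section together, here is a rough end-to-end sketch that simply combines the snippets above (it is not a separate recipe from this guide; the exact memory and speed savings depend on your GPU, and torch.compile requires PyTorch 2.0 or newer):

import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

# Load the weights in half-precision to reduce memory use and speed up inference
pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# On PyTorch 2.0+, scaled-dot product attention is used automatically.
# If memory is tight, swap .to("cuda") above for pipeline.enable_model_cpu_offload().

# Compile the UNet; the first call is slower while compilation happens
pipeline.unet = torch.compile(pipeline.unet, mode="reduce-overhead", fullgraph=True)

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)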